From: Geert Uytterhoeven <geert+renesas@glider.be>
[ Upstream commit 45f5d5a9e34d3fe4140a9a3b5f7ebe86c252440a ]
Currently there are two nodes named "regulator1" in the Draak DTS: a 3.3V regulator for the eMMC and the LVDS decoder, and a 12V regulator for the backlight. This causes the former to be overwritten by the latter.
Fix this by renaming all regulators with numerical suffixes to use named suffixes, which are less likely to conflict.
Fixes: 4fbd4158fe8967e9 ("arm64: dts: renesas: r8a77995: draak: Add backlight")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/arm64/boot/dts/renesas/r8a77995-draak.dts | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/boot/dts/renesas/r8a77995-draak.dts b/arch/arm64/boot/dts/renesas/r8a77995-draak.dts
index a7dc11e36fd9d..071f66d8719e7 100644
--- a/arch/arm64/boot/dts/renesas/r8a77995-draak.dts
+++ b/arch/arm64/boot/dts/renesas/r8a77995-draak.dts
@@ -97,7 +97,7 @@
 		reg = <0x0 0x48000000 0x0 0x18000000>;
 	};
 
-	reg_1p8v: regulator0 {
+	reg_1p8v: regulator-1p8v {
 		compatible = "regulator-fixed";
 		regulator-name = "fixed-1.8V";
 		regulator-min-microvolt = <1800000>;
@@ -106,7 +106,7 @@
 		regulator-always-on;
 	};
 
-	reg_3p3v: regulator1 {
+	reg_3p3v: regulator-3p3v {
 		compatible = "regulator-fixed";
 		regulator-name = "fixed-3.3V";
 		regulator-min-microvolt = <3300000>;
@@ -115,7 +115,7 @@
 		regulator-always-on;
 	};
 
-	reg_12p0v: regulator1 {
+	reg_12p0v: regulator-12p0v {
 		compatible = "regulator-fixed";
 		regulator-name = "D12.0V";
 		regulator-min-microvolt = <12000000>;
From: Wenwen Wang <wenwen@cs.uga.edu>
[ Upstream commit 2c231c0c1dec42192aca0f87f2dc68b8f0cbc7d2 ]
In ti_dra7_xbar_probe(), 'rsv_events' is allocated through kcalloc(). Then of_property_read_u32_array() is invoked to read the property. However, if this call fails, 'rsv_events' is not deallocated, leading to a memory leak. To fix this issue, free 'rsv_events' before returning the error.
Signed-off-by: Wenwen Wang <wenwen@cs.uga.edu>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/1565938136-7249-1-git-send-email-wenwen@cs.uga.edu
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/dma/ti/dma-crossbar.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/ti/dma-crossbar.c b/drivers/dma/ti/dma-crossbar.c
index ad2f0a4cd6a4d..f255056696eec 100644
--- a/drivers/dma/ti/dma-crossbar.c
+++ b/drivers/dma/ti/dma-crossbar.c
@@ -391,8 +391,10 @@ static int ti_dra7_xbar_probe(struct platform_device *pdev)
 
 		ret = of_property_read_u32_array(node, pname, (u32 *)rsv_events,
 						 nelm * 2);
-		if (ret)
+		if (ret) {
+			kfree(rsv_events);
 			return ret;
+		}
 
 		for (i = 0; i < nelm; i++) {
 			ti_dra7_xbar_reserve(rsv_events[i][0], rsv_events[i][1],
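For reference, a minimal sketch of the allocate/parse/error-path pattern this fix restores. parse_reserved_events() is a hypothetical wrapper written for illustration, not the driver's actual code; the two library calls are the real kernel APIs:

	#include <linux/of.h>
	#include <linux/slab.h>

	static int parse_reserved_events(struct device_node *node,
					 const char *pname, int nelm)
	{
		u32 (*rsv_events)[2];
		int ret;

		rsv_events = kcalloc(nelm, sizeof(*rsv_events), GFP_KERNEL);
		if (!rsv_events)
			return -ENOMEM;

		ret = of_property_read_u32_array(node, pname, (u32 *)rsv_events,
						 nelm * 2);
		if (ret) {
			kfree(rsv_events);	/* free on *every* exit path */
			return ret;
		}

		/* ... consume rsv_events ... */
		kfree(rsv_events);
		return 0;
	}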
From: Wenwen Wang <wenwen@cs.uga.edu>
[ Upstream commit 962411b05a6d3342aa649e39cda1704c1fc042c6 ]
In omap_dma_probe(), if devm_request_irq() fails, no cleanup is performed before returning the error, so the resources set up earlier in the function are leaked. To fix this issue, invoke omap_dma_free() to do the cleanup.
Signed-off-by: Wenwen Wang <wenwen@cs.uga.edu>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/1565938570-7528-1-git-send-email-wenwen@cs.uga.edu
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/dma/ti/omap-dma.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/ti/omap-dma.c b/drivers/dma/ti/omap-dma.c
index ba27802efcd0a..d07c0d5de7a25 100644
--- a/drivers/dma/ti/omap-dma.c
+++ b/drivers/dma/ti/omap-dma.c
@@ -1540,8 +1540,10 @@ static int omap_dma_probe(struct platform_device *pdev)
 
 		rc = devm_request_irq(&pdev->dev, irq, omap_dma_irq,
 				      IRQF_SHARED, "omap-dma-engine", od);
-		if (rc)
+		if (rc) {
+			omap_dma_free(od);
 			return rc;
+		}
 	}
 
 	if (omap_dma_glbl_read(od, CAPS_0) & CAPS_0_SUPPORT_LL123)
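Worth spelling out why the explicit call is needed at all: devm_request_irq() auto-manages only the IRQ itself, so state built up earlier in probe still has to be unwound by hand on failure. A minimal sketch of the shape, with my_dma_init()/my_dma_free() as hypothetical stand-ins for the driver's setup/teardown helpers:

	rc = my_dma_init(od);	/* allocate channels, register engine, ... */
	if (rc)
		return rc;

	rc = devm_request_irq(&pdev->dev, irq, my_dma_irq, IRQF_SHARED,
			      "my-dma-engine", od);
	if (rc) {
		my_dma_free(od);	/* devm will not undo my_dma_init() */
		return rc;
	}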
From: Peter Zijlstra <peterz@infradead.org>
[ Upstream commit 9b8bd476e78e89c9ea26c3b435ad0201c3d7dbf5 ]
Identical to __put_user(): the __get_user() argument evaluation can likewise leak UBSAN crud into the __uaccess_begin() / __uaccess_end() region. While uncommon, this was observed to happen for:
drivers/xen/gntdev.c: if (__get_user(old_status, batch->status[i]))
where UBSAN added array bound checking.
This complements commit:
6ae865615fc4 ("x86/uaccess: Dont leak the AC flag into __put_user() argument evaluation")
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: broonie@kernel.org
Cc: sfr@canb.auug.org.au
Cc: akpm@linux-foundation.org
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: mhocko@suse.cz
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lkml.kernel.org/r/20190829082445.GM2369@hirez.programming.kicks-ass....
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/x86/include/asm/uaccess.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index c82abd6e4ca39..869794bd0fd98 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -442,8 +442,10 @@ __pu_label:							\
 ({									\
 	int __gu_err;							\
 	__inttype(*(ptr)) __gu_val;					\
+	__typeof__(ptr) __gu_ptr = (ptr);				\
+	__typeof__(size) __gu_size = (size);				\
 	__uaccess_begin_nospec();					\
-	__get_user_size(__gu_val, (ptr), (size), __gu_err, -EFAULT);	\
+	__get_user_size(__gu_val, __gu_ptr, __gu_size, __gu_err, -EFAULT); \
 	__uaccess_end();						\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
 	__builtin_expect(__gu_err, 0);					\
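To make the hazard concrete, here is a simplified sketch; these are not the kernel's actual macros, and uaccess_begin()/uaccess_end() stand in for the real __uaccess_begin_nospec()/__uaccess_end() pair:

	/* Bad: (ptr) expands *inside* the uaccess region, so any
	 * compiler-inserted instrumentation in the argument expression
	 * (e.g. a UBSAN bounds check on batch->status[i]) runs with
	 * AC set. */
	#define bad_get(x, ptr)				\
	({						\
		uaccess_begin();			\
		(x) = *(ptr);				\
		uaccess_end();				\
	})

	/* Good: evaluate the argument expression once, up front, so
	 * only the bare dereference sits between begin and end. */
	#define good_get(x, ptr)			\
	({						\
		__typeof__(ptr) __p = (ptr);		\
		uaccess_begin();			\
		(x) = *__p;				\
		uaccess_end();				\
	})

The patch applies exactly the good_get() hoisting to the real __get_user() macro.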
From: Tianyu Lan <Tianyu.Lan@microsoft.com>
[ Upstream commit 4030b4c585c41eeefec7bd20ce3d0e100a0f2e4d ]
When the 'start' parameter is >= 0xFF000000 on 32-bit systems, or >= 0xFFFFFFFF'FF000000 on 64-bit systems, fill_gva_list() gets into an infinite loop.
With such inputs, 'cur' overflows after adding HV_TLB_FLUSH_UNIT and always compares as less than 'end'. Memory is filled with guest virtual addresses until the system crashes.
Fix this by never incrementing 'cur' to be larger than 'end'.
Reported-by: Jong Hyun Park <park.jonghyun@yonsei.ac.kr>
Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 2ffd9e33ce4a ("x86/hyper-v: Use hypercall for remote TLB flush")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/x86/hyperv/mmu.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
index e65d7fe6489f3..5208ba49c89a9 100644
--- a/arch/x86/hyperv/mmu.c
+++ b/arch/x86/hyperv/mmu.c
@@ -37,12 +37,14 @@ static inline int fill_gva_list(u64 gva_list[], int offset,
 		 * Lower 12 bits encode the number of additional
 		 * pages to flush (in addition to the 'cur' page).
 		 */
-		if (diff >= HV_TLB_FLUSH_UNIT)
+		if (diff >= HV_TLB_FLUSH_UNIT) {
 			gva_list[gva_n] |= ~PAGE_MASK;
-		else if (diff)
+			cur += HV_TLB_FLUSH_UNIT;
+		} else if (diff) {
 			gva_list[gva_n] |= (diff - 1) >> PAGE_SHIFT;
+			cur = end;
+		}
 
-		cur += HV_TLB_FLUSH_UNIT;
 		gva_n++;
 
 	} while (cur < end);
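A small user-space demonstration of the wrap-around; the HV_TLB_FLUSH_UNIT value below (4096 pages of 4 KiB) is assumed purely for illustration:

	#include <stdint.h>
	#include <stdio.h>

	#define HV_TLB_FLUSH_UNIT (4096 * 4096ULL)	/* assumed value */

	int main(void)
	{
		uint64_t cur = 0xffffffffff000000ULL;	/* 'start' from above */
		uint64_t end = 0xfffffffffffff000ULL;

		/* Pre-fix loop step: the increment wraps cur past zero,
		 * so the 'cur < end' condition never becomes false. */
		cur += HV_TLB_FLUSH_UNIT;
		printf("cur = 0x%016llx, cur < end: %d\n",
		       (unsigned long long)cur, cur < end);
		return 0;
	}

The fix caps the advance instead: when the remaining span is smaller than one flush unit, 'cur' is set to 'end' directly, so it can never wrap.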
From: Al Viro <viro@zeniv.linux.org.uk>
[ Upstream commit f19e4ed1e1edbfa3c9ccb9fed17759b7d6db24c6 ]
Revert commit cc57c07343bd ("configfs: fix registered group removal"). It was an attempt to handle something that fundamentally doesn't work: configfs_register_group() should never be done in a part of the tree that can be rmdir'ed, and in mainline it never had been, so let's not borrow trouble. The fix was racy anyway; it would take a lot more to make that work, and the desired semantics are not clear.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 fs/configfs/dir.c | 11 -----------
 1 file changed, 11 deletions(-)

diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
index d2ca5287762d8..7d8d16da40314 100644
--- a/fs/configfs/dir.c
+++ b/fs/configfs/dir.c
@@ -1770,16 +1770,6 @@ void configfs_unregister_group(struct config_group *group)
 	struct dentry *dentry = group->cg_item.ci_dentry;
 	struct dentry *parent = group->cg_item.ci_parent->ci_dentry;
 
-	mutex_lock(&subsys->su_mutex);
-	if (!group->cg_item.ci_parent->ci_group) {
-		/*
-		 * The parent has already been unlinked and detached
-		 * due to a rmdir.
-		 */
-		goto unlink_group;
-	}
-	mutex_unlock(&subsys->su_mutex);
-
 	inode_lock_nested(d_inode(parent), I_MUTEX_PARENT);
 	spin_lock(&configfs_dirent_lock);
 	configfs_detach_prep(dentry, NULL);
@@ -1794,7 +1784,6 @@ void configfs_unregister_group(struct config_group *group)
 	dput(dentry);
 
 	mutex_lock(&subsys->su_mutex);
-unlink_group:
 	unlink_group(group);
 	mutex_unlock(&subsys->su_mutex);
 }
Please stop selectively backporting parts of random series. We'll need to take the full series from Al in -stable instead.
On Tue, Sep 10, 2019 at 08:01:35AM +0200, Christoph Hellwig wrote:
> Please stop selectively backporting parts of random series. We'll need
> to take the full series from Al in -stable instead.
Yeah, I'll pick this whole series up soon. Sasha, can you drop this from your queues please?
thanks,
greg k-h
On Tue, Sep 10, 2019 at 10:19:38AM +0100, Greg KH wrote:
> On Tue, Sep 10, 2019 at 08:01:35AM +0200, Christoph Hellwig wrote:
> > Please stop selectively backporting parts of random series. We'll need
> > to take the full series from Al in -stable instead.
> 
> Yeah, I'll pick this whole series up soon. Sasha, can you drop this from
> your queues please?
Yup.
--
Thanks,
Sasha
From: Jacob Pan <jacob.jun.pan@linux.intel.com>
[ Upstream commit 8744daf4b0699b724ee0a56b313a6c0c4ea289e3 ]
Global pages support is removed from the VT-d spec 3.0. Since the global pages (G) flag only affects first-level paging structures, and since DMA requests with PASID are only supported by VT-d spec 3.0 and onward, we can safely remove global pages support.

For kernel shared virtual address IOTLB invalidation, PASID granularity and page-selective-within-PASID granularity will be used; there is no global granularity supported. Without this fix, IOTLB invalidation will cause an invalid descriptor error in the queued invalidation (QI) interface.
Fixes: 1c4f88b7f1f9 ("iommu/vt-d: Shared virtual address in scalable mode")
Reported-by: Sanjay K Kumar <sanjay.k.kumar@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/iommu/intel-svm.c   | 36 +++++++++++++++---------------------
 include/linux/intel-iommu.h |  3 ---
 2 files changed, 15 insertions(+), 24 deletions(-)

diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu/intel-svm.c
index eceaa7e968ae8..641dc223c97b8 100644
--- a/drivers/iommu/intel-svm.c
+++ b/drivers/iommu/intel-svm.c
@@ -100,24 +100,19 @@ int intel_svm_finish_prq(struct intel_iommu *iommu)
 }
 
 static void intel_flush_svm_range_dev (struct intel_svm *svm, struct intel_svm_dev *sdev,
-				unsigned long address, unsigned long pages, int ih, int gl)
+				unsigned long address, unsigned long pages, int ih)
 {
 	struct qi_desc desc;
 
-	if (pages == -1) {
-		/* For global kernel pages we have to flush them in *all* PASIDs
-		 * because that's the only option the hardware gives us. Despite
-		 * the fact that they are actually only accessible through one. */
-		if (gl)
-			desc.qw0 = QI_EIOTLB_PASID(svm->pasid) |
-				QI_EIOTLB_DID(sdev->did) |
-				QI_EIOTLB_GRAN(QI_GRAN_ALL_ALL) |
-				QI_EIOTLB_TYPE;
-		else
-			desc.qw0 = QI_EIOTLB_PASID(svm->pasid) |
-				QI_EIOTLB_DID(sdev->did) |
-				QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) |
-				QI_EIOTLB_TYPE;
+	/*
+	 * Do PASID granu IOTLB invalidation if page selective capability is
+	 * not available.
+	 */
+	if (pages == -1 || !cap_pgsel_inv(svm->iommu->cap)) {
+		desc.qw0 = QI_EIOTLB_PASID(svm->pasid) |
+			QI_EIOTLB_DID(sdev->did) |
+			QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) |
+			QI_EIOTLB_TYPE;
 		desc.qw1 = 0;
 	} else {
 		int mask = ilog2(__roundup_pow_of_two(pages));
@@ -127,7 +122,6 @@ static void intel_flush_svm_range_dev (struct intel_svm *svm, struct intel_svm_d
 			QI_EIOTLB_GRAN(QI_GRAN_PSI_PASID) |
 			QI_EIOTLB_TYPE;
 		desc.qw1 = QI_EIOTLB_ADDR(address) |
-			QI_EIOTLB_GL(gl) |
 			QI_EIOTLB_IH(ih) |
 			QI_EIOTLB_AM(mask);
 	}
@@ -162,13 +156,13 @@ static void intel_flush_svm_range_dev (struct intel_svm *svm, struct intel_svm_d
 }
 
 static void intel_flush_svm_range(struct intel_svm *svm, unsigned long address,
-		unsigned long pages, int ih, int gl)
+		unsigned long pages, int ih)
 {
 	struct intel_svm_dev *sdev;
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(sdev, &svm->devs, list)
-		intel_flush_svm_range_dev(svm, sdev, address, pages, ih, gl);
+		intel_flush_svm_range_dev(svm, sdev, address, pages, ih);
 	rcu_read_unlock();
 }
 
@@ -180,7 +174,7 @@ static void intel_invalidate_range(struct mmu_notifier *mn,
 	struct intel_svm *svm = container_of(mn, struct intel_svm, notifier);
 
 	intel_flush_svm_range(svm, start,
-			      (end - start + PAGE_SIZE - 1) >> VTD_PAGE_SHIFT, 0, 0);
+			      (end - start + PAGE_SIZE - 1) >> VTD_PAGE_SHIFT, 0);
 }
 
 static void intel_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
@@ -203,7 +197,7 @@ static void intel_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 	rcu_read_lock();
 	list_for_each_entry_rcu(sdev, &svm->devs, list) {
 		intel_pasid_tear_down_entry(svm->iommu, sdev->dev, svm->pasid);
-		intel_flush_svm_range_dev(svm, sdev, 0, -1, 0, !svm->mm);
+		intel_flush_svm_range_dev(svm, sdev, 0, -1, 0);
 	}
 	rcu_read_unlock();
 
@@ -410,7 +404,7 @@ int intel_svm_unbind_mm(struct device *dev, int pasid)
 			 * large and has to be physically contiguous. So it's
 			 * hard to be as defensive as we might like. */
 			intel_pasid_tear_down_entry(iommu, dev, svm->pasid);
-			intel_flush_svm_range_dev(svm, sdev, 0, -1, 0, !svm->mm);
+			intel_flush_svm_range_dev(svm, sdev, 0, -1, 0);
 			kfree_rcu(sdev, rcu);
 
 			if (list_empty(&svm->devs)) {
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index 6a8dd4af01472..ba8dc520cc79a 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -346,7 +346,6 @@ enum {
 #define QI_PC_PASID_SEL	(QI_PC_TYPE | QI_PC_GRAN(1))
 
 #define QI_EIOTLB_ADDR(addr)	((u64)(addr) & VTD_PAGE_MASK)
-#define QI_EIOTLB_GL(gl)	(((u64)gl) << 7)
#define QI_EIOTLB_IH(ih)	(((u64)ih) << 6)
 #define QI_EIOTLB_AM(am)	(((u64)am))
 #define QI_EIOTLB_PASID(pasid)	(((u64)pasid) << 32)
@@ -378,8 +377,6 @@ enum {
 #define QI_RESP_INVALID	0x1
 #define QI_RESP_FAILURE	0xf
 
-#define QI_GRAN_ALL_ALL	0
-#define QI_GRAN_NONG_ALL	1
 #define QI_GRAN_NONG_PASID	2
 #define QI_GRAN_PSI_PASID	3
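Condensed, the post-fix granularity decision comes down to this ('gran' is a hypothetical local used only to summarize the diff above):

	/* With the global ('gl') path gone, every flush is scoped to one
	 * PASID; the only remaining question is page-selective or not. */
	if (pages == -1 || !cap_pgsel_inv(svm->iommu->cap))
		gran = QI_GRAN_NONG_PASID;	/* all pages of this PASID */
	else
		gran = QI_GRAN_PSI_PASID;	/* page-selective within PASID */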
From: Baolin Wang <baolin.wang@linaro.org>
[ Upstream commit 689379c2f383b1fdfdff03e84cf659daf62f2088 ]
In the Spreadtrum DMA link-list mode, a slave hardware request triggers the DMA engine to load the DMA configuration from the link-list memory automatically. But the slave will get an incorrect residue before the slave hardware request arrives, because the first node, which is used to trigger the link-list, was configured with the last source and destination addresses.

Thus we should make sure the first node is configured with the start source and destination addresses, which fixes this issue.
Fixes: 4ac695464763 ("dmaengine: sprd: Support DMA link-list mode")
Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Link: https://lore.kernel.org/r/77868edb7aff9d5cb12ac3af8827ef2e244441a6.156715047...
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/dma/sprd-dma.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/dma/sprd-dma.c b/drivers/dma/sprd-dma.c
index baac476c86224..525dc7338fe3b 100644
--- a/drivers/dma/sprd-dma.c
+++ b/drivers/dma/sprd-dma.c
@@ -908,6 +908,7 @@ sprd_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 	struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
 	struct dma_slave_config *slave_cfg = &schan->slave_cfg;
 	dma_addr_t src = 0, dst = 0;
+	dma_addr_t start_src = 0, start_dst = 0;
 	struct sprd_dma_desc *sdesc;
 	struct scatterlist *sg;
 	u32 len = 0;
@@ -954,6 +955,11 @@ sprd_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 			dst = sg_dma_address(sg);
 		}
 
+		if (!i) {
+			start_src = src;
+			start_dst = dst;
+		}
+
 		/*
 		 * The link-list mode needs at least 2 link-list
 		 * configurations. If there is only one sg, it doesn't
@@ -970,8 +976,8 @@ sprd_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 		}
 	}
 
-	ret = sprd_dma_fill_desc(chan, &sdesc->chn_hw, 0, 0, src, dst, len,
-				 dir, flags, slave_cfg);
+	ret = sprd_dma_fill_desc(chan, &sdesc->chn_hw, 0, 0, start_src,
+				 start_dst, len, dir, flags, slave_cfg);
 	if (ret) {
 		kfree(sdesc);
 		return NULL;
From: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
[ Upstream commit cf24aac38698bfa1d021afd3883df3c4c65143a4 ]
Commit 20c169aceb45 ("dmaengine: rcar-dmac: clear pertinence number of channels") forgets to clear the last channel via DMACHCLR in rcar_dmac_init() (and needlessly clears the first channel instead) if an IOMMU is mapped to the device. Fix this by using a "channels_mask" bitfield.

Note that the hardware and driver don't support more than 32 bits in the DMACHCLR register anyway, so this patch also rejects more than 32 channels in rcar_dmac_parse_of().
Fixes: 20c169aceb459575 ("dmaengine: rcar-dmac: clear pertinence number of channels")
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/1567424643-26629-1-git-send-email-yoshihiro.shimod...
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/dma/sh/rcar-dmac.c | 28 +++++++++++++++++++---------
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/drivers/dma/sh/rcar-dmac.c b/drivers/dma/sh/rcar-dmac.c
index 54de669c38b84..f1d89bdebddab 100644
--- a/drivers/dma/sh/rcar-dmac.c
+++ b/drivers/dma/sh/rcar-dmac.c
@@ -192,6 +192,7 @@ struct rcar_dmac_chan {
  * @iomem: remapped I/O memory base
  * @n_channels: number of available channels
  * @channels: array of DMAC channels
+ * @channels_mask: bitfield of which DMA channels are managed by this driver
  * @modules: bitmask of client modules in use
  */
 struct rcar_dmac {
@@ -202,6 +203,7 @@ struct rcar_dmac {
 
 	unsigned int n_channels;
 	struct rcar_dmac_chan *channels;
+	unsigned int channels_mask;
 
 	DECLARE_BITMAP(modules, 256);
 };
@@ -438,7 +440,7 @@ static int rcar_dmac_init(struct rcar_dmac *dmac)
 	u16 dmaor;
 
 	/* Clear all channels and enable the DMAC globally. */
-	rcar_dmac_write(dmac, RCAR_DMACHCLR, GENMASK(dmac->n_channels - 1, 0));
+	rcar_dmac_write(dmac, RCAR_DMACHCLR, dmac->channels_mask);
 	rcar_dmac_write(dmac, RCAR_DMAOR,
 			RCAR_DMAOR_PRI_FIXED | RCAR_DMAOR_DME);
 
@@ -814,6 +816,9 @@ static void rcar_dmac_stop_all_chan(struct rcar_dmac *dmac)
 	for (i = 0; i < dmac->n_channels; ++i) {
 		struct rcar_dmac_chan *chan = &dmac->channels[i];
 
+		if (!(dmac->channels_mask & BIT(i)))
+			continue;
+
 		/* Stop and reinitialize the channel. */
 		spin_lock_irq(&chan->lock);
 		rcar_dmac_chan_halt(chan);
@@ -1776,6 +1781,8 @@ static int rcar_dmac_chan_probe(struct rcar_dmac *dmac,
 	return 0;
 }
 
+#define RCAR_DMAC_MAX_CHANNELS	32
+
 static int rcar_dmac_parse_of(struct device *dev, struct rcar_dmac *dmac)
 {
 	struct device_node *np = dev->of_node;
@@ -1787,12 +1794,16 @@ static int rcar_dmac_parse_of(struct device *dev, struct rcar_dmac *dmac)
 		return ret;
 	}
 
-	if (dmac->n_channels <= 0 || dmac->n_channels >= 100) {
+	/* The hardware and driver don't support more than 32 bits in CHCLR */
+	if (dmac->n_channels <= 0 ||
+	    dmac->n_channels >= RCAR_DMAC_MAX_CHANNELS) {
 		dev_err(dev, "invalid number of channels %u\n",
 			dmac->n_channels);
 		return -EINVAL;
 	}
 
+	dmac->channels_mask = GENMASK(dmac->n_channels - 1, 0);
+
 	return 0;
 }
 
@@ -1802,7 +1813,6 @@ static int rcar_dmac_probe(struct platform_device *pdev)
 		DMA_SLAVE_BUSWIDTH_2_BYTES | DMA_SLAVE_BUSWIDTH_4_BYTES |
 		DMA_SLAVE_BUSWIDTH_8_BYTES | DMA_SLAVE_BUSWIDTH_16_BYTES |
 		DMA_SLAVE_BUSWIDTH_32_BYTES | DMA_SLAVE_BUSWIDTH_64_BYTES;
-	unsigned int channels_offset = 0;
 	struct dma_device *engine;
 	struct rcar_dmac *dmac;
 	struct resource *mem;
@@ -1831,10 +1841,8 @@ static int rcar_dmac_probe(struct platform_device *pdev)
 	 * level we can't disable it selectively, so ignore channel 0 for now if
 	 * the device is part of an IOMMU group.
 	 */
-	if (device_iommu_mapped(&pdev->dev)) {
-		dmac->n_channels--;
-		channels_offset = 1;
-	}
+	if (device_iommu_mapped(&pdev->dev))
+		dmac->channels_mask &= ~BIT(0);
 
 	dmac->channels = devm_kcalloc(&pdev->dev, dmac->n_channels,
 				      sizeof(*dmac->channels), GFP_KERNEL);
@@ -1892,8 +1900,10 @@ static int rcar_dmac_probe(struct platform_device *pdev)
 	INIT_LIST_HEAD(&engine->channels);
 
 	for (i = 0; i < dmac->n_channels; ++i) {
-		ret = rcar_dmac_chan_probe(dmac, &dmac->channels[i],
-					   i + channels_offset);
+		if (!(dmac->channels_mask & BIT(i)))
+			continue;
+
+		ret = rcar_dmac_chan_probe(dmac, &dmac->channels[i], i);
 		if (ret < 0)
 			goto error;
 	}
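The mask handling itself is just two steps; the design choice worth noting is that dropping BIT(0) (rather than decrementing n_channels and offsetting, as the old code did) keeps channel indices stable, which is what lets DMACHCLR cover the right bits. Condensed from the diff above:

	/* all n_channels channels are managed ... */
	dmac->channels_mask = GENMASK(dmac->n_channels - 1, 0);

	/* ... except channel 0 when the device sits behind an IOMMU */
	if (device_iommu_mapped(&pdev->dev))
		dmac->channels_mask &= ~BIT(0);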
From: Hillf Danton <hdanton@sina.com>
[ Upstream commit d41a3effbb53b1bcea41e328d16a4d046a508381 ]
If a request_key authentication token key gets revoked, there's a window in which request_key_auth_describe() can see it with a NULL payload - but it makes no check for this and something like the following oops may occur:
	BUG: Kernel NULL pointer dereference at 0x00000038
	Faulting instruction address: 0xc0000000004ddf30
	Oops: Kernel access of bad area, sig: 11 [#1]
	...
	NIP [...] request_key_auth_describe+0x90/0xd0
	LR [...] request_key_auth_describe+0x54/0xd0
	Call Trace:
	[...] request_key_auth_describe+0x54/0xd0 (unreliable)
	[...] proc_keys_show+0x308/0x4c0
	[...] seq_read+0x3d0/0x540
	[...] proc_reg_read+0x90/0x110
	[...] __vfs_read+0x3c/0x70
	[...] vfs_read+0xb4/0x1b0
	[...] ksys_read+0x7c/0x130
	[...] system_call+0x5c/0x70
Fix this by checking for a NULL pointer when describing such a key.
Also make the read routine check for a NULL pointer to be on the safe side.
[DH: Modified to not take already-held rcu lock and modified to also check in the read routine]
Fixes: 04c567d9313e ("[PATCH] Keys: Fix race between two instantiators of a key")
Reported-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
Signed-off-by: Hillf Danton <hdanton@sina.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 security/keys/request_key_auth.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/security/keys/request_key_auth.c b/security/keys/request_key_auth.c
index e45b5cf3b97fd..8491becb57270 100644
--- a/security/keys/request_key_auth.c
+++ b/security/keys/request_key_auth.c
@@ -66,6 +66,9 @@ static void request_key_auth_describe(const struct key *key,
 {
 	struct request_key_auth *rka = get_request_key_auth(key);
 
+	if (!rka)
+		return;
+
 	seq_puts(m, "key:");
 	seq_puts(m, key->description);
 	if (key_is_positive(key))
@@ -83,6 +86,9 @@ static long request_key_auth_read(const struct key *key,
 	size_t datalen;
 	long ret;
 
+	if (!rka)
+		return -EKEYREVOKED;
+
 	datalen = rka->callout_len;
 	ret = datalen;
 
From: Stuart Hayes <stuart.w.hayes@gmail.com>
[ Upstream commit 36b7200f67dfe75b416b5281ed4ace9927b513bc ]
When devices are attached to the amd_iommu in a kdump kernel, the old device table entries (DTEs), which were copied from the crashed kernel, will be overwritten with a new domain number. When the new DTE is written, the IOMMU is told to flush the DTE from its internal cache--but it is not told to flush the translation cache entries for the old domain number.
Without this patch, AMD systems using the tg3 network driver fail when kdump tries to save the vmcore to a network system, showing network timeouts and (sometimes) IOMMU errors in the kernel log.
This patch will flush IOMMU translation cache entries for the old domain when a DTE gets overwritten with a new domain number.
Signed-off-by: Stuart Hayes <stuart.w.hayes@gmail.com>
Fixes: 3ac3e5ee5ed5 ('iommu/amd: Copy old trans table from old kernel')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/iommu/amd_iommu.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index dce1d8d2e8a44..b265062edf6c8 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -1143,6 +1143,17 @@ static void amd_iommu_flush_tlb_all(struct amd_iommu *iommu)
 	iommu_completion_wait(iommu);
 }
 
+static void amd_iommu_flush_tlb_domid(struct amd_iommu *iommu, u32 dom_id)
+{
+	struct iommu_cmd cmd;
+
+	build_inv_iommu_pages(&cmd, 0, CMD_INV_IOMMU_ALL_PAGES_ADDRESS,
+			      dom_id, 1);
+	iommu_queue_command(iommu, &cmd);
+
+	iommu_completion_wait(iommu);
+}
+
 static void amd_iommu_flush_all(struct amd_iommu *iommu)
 {
 	struct iommu_cmd cmd;
@@ -1863,6 +1874,7 @@ static void set_dte_entry(u16 devid, struct protection_domain *domain,
 {
 	u64 pte_root = 0;
 	u64 flags = 0;
+	u32 old_domid;
 
 	if (domain->mode != PAGE_MODE_NONE)
 		pte_root = iommu_virt_to_phys(domain->pt_root);
@@ -1912,8 +1924,20 @@ static void set_dte_entry(u16 devid, struct protection_domain *domain,
 	flags &= ~DEV_DOMID_MASK;
 	flags |= domain->id;
 
+	old_domid = amd_iommu_dev_table[devid].data[1] & DEV_DOMID_MASK;
 	amd_iommu_dev_table[devid].data[1] = flags;
 	amd_iommu_dev_table[devid].data[0] = pte_root;
+
+	/*
+	 * A kdump kernel might be replacing a domain ID that was copied from
+	 * the previous kernel--if so, it needs to flush the translation cache
+	 * entries for the old domain ID that is being overwritten
+	 */
+	if (old_domid) {
+		struct amd_iommu *iommu = amd_iommu_rlookup_table[devid];
+
+		amd_iommu_flush_tlb_domid(iommu, old_domid);
+	}
 }
 
 static void clear_dte_entry(u16 devid)
From: Joerg Roedel <jroedel@suse.de>
[ Upstream commit 754265bcab78a9014f0f99cd35e0d610fcd7dfa7 ]
After the conversion to lock-less dma-api calls, the increase_address_space() function can be called without any locking. Multiple CPUs could potentially race to increase the address space, leading to invalid domain->mode settings and invalid page-tables. This has been happening in the wild under high IO load and memory pressure.

Fix the race by locking this operation. The function is called infrequently, so this does not reintroduce a performance regression in the dma-api path.
Reported-by: Qian Cai <cai@lca.pw>
Fixes: 256e4621c21a ('iommu/amd: Make use of the generic IOVA allocator')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/iommu/amd_iommu.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index b265062edf6c8..3e687f18b203a 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -1425,18 +1425,21 @@ static void free_pagetable(struct protection_domain *domain)
  * another level increases the size of the address space by 9 bits to a size up
  * to 64 bits.
  */
-static bool increase_address_space(struct protection_domain *domain,
+static void increase_address_space(struct protection_domain *domain,
 				   gfp_t gfp)
 {
+	unsigned long flags;
 	u64 *pte;
 
-	if (domain->mode == PAGE_MODE_6_LEVEL)
+	spin_lock_irqsave(&domain->lock, flags);
+
+	if (WARN_ON_ONCE(domain->mode == PAGE_MODE_6_LEVEL))
 		/* address space already 64 bit large */
-		return false;
+		goto out;
 
 	pte = (void *)get_zeroed_page(gfp);
 	if (!pte)
-		return false;
+		goto out;
 
 	*pte = PM_LEVEL_PDE(domain->mode,
 			    iommu_virt_to_phys(domain->pt_root));
@@ -1444,7 +1447,10 @@ static bool increase_address_space(struct protection_domain *domain,
 	domain->mode    += 1;
 	domain->updated  = true;
 
-	return true;
+out:
+	spin_unlock_irqrestore(&domain->lock, flags);
+
+	return;
 }
 
 static u64 *alloc_pte(struct protection_domain *domain,
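The race being closed is the classic unlocked check-then-grow: two CPUs can both observe mode < PAGE_MODE_6_LEVEL and both chain a new top level, corrupting domain->mode. Condensed, the locked shape is the following (a sketch only; the pt_root update sits between the two hunks above and is implied by the surrounding code rather than shown in the diff):

	spin_lock_irqsave(&domain->lock, flags);

	if (WARN_ON_ONCE(domain->mode == PAGE_MODE_6_LEVEL))
		goto out;		/* already at the maximum depth */

	pte = (void *)get_zeroed_page(gfp);
	if (!pte)
		goto out;

	/* chain the old top level below the new one, then grow */
	*pte = PM_LEVEL_PDE(domain->mode, iommu_virt_to_phys(domain->pt_root));
	domain->pt_root  = pte;
	domain->mode    += 1;
	domain->updated  = true;

out:
	spin_unlock_irqrestore(&domain->lock, flags);

With the check and the grow under one lock, the second racing CPU sees the already-updated mode and simply allocates nothing.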