Setting si_code to 0 is the same as setting si_code to SI_USER, which is
definitely not correct. With si_code set to SI_USER, si_pid and si_uid
will be copied to userspace instead of si_addr, which is very wrong.
Fix this by using a sensible si_code (SEGV_MAPERR) for this failure.
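For context, a simplified sketch (not the exact kernel definition) of why
si_code selects which fields reach userspace: the kill and fault views of
the siginfo union share storage, and the copy-to-user path picks the
interpretation based on si_code.

    #include <sys/types.h>

    /* Simplified sketch of the siginfo union layout; the real kernel
     * structure has more members and padding. The kill and sigfault
     * views overlap, so with si_code == SI_USER the kernel copies
     * si_pid/si_uid to userspace, clobbering what should have been
     * si_addr for a fault. */
    union simplified_sifields {
            struct {
                    pid_t si_pid;   /* sender's pid */
                    uid_t si_uid;   /* sender's uid */
            } kill;                 /* used when si_code == SI_USER */
            struct {
                    void *si_addr;  /* faulting address */
            } sigfault;             /* used for SEGV_MAPERR and friends */
    };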
Cc: stable(a)vger.kernel.org
Fixes: b920de1b77b7 ("mn10300: add the MN10300/AM33 architecture to the kernel")
Cc: David Howells <dhowells(a)redhat.com>
Cc: Masakazu Urade <urade.masakazu(a)jp.panasonic.com>
Cc: Koichi Yasutake <yasutake.koichi(a)jp.panasonic.com>
Signed-off-by: "Eric W. Biederman" <ebiederm(a)xmission.com>
---
arch/mn10300/mm/misalignment.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/mn10300/mm/misalignment.c b/arch/mn10300/mm/misalignment.c
index b39a388825ae..8ace89617c1c 100644
--- a/arch/mn10300/mm/misalignment.c
+++ b/arch/mn10300/mm/misalignment.c
@@ -437,7 +437,7 @@ asmlinkage void misalignment(struct pt_regs *regs, enum exception_code code)
info.si_signo = SIGSEGV;
info.si_errno = 0;
- info.si_code = 0;
+ info.si_code = SEGV_MAPERR;
info.si_addr = (void *) regs->pc;
force_sig_info(SIGSEGV, &info, current);
return;
--
2.14.1
While reviewing the signal sending on openrisc, the do_unaligned_access
function stood out because it is obviously wrong: a comment claims that
si_code was set above, when in fact si_code is never set, so a random
si_code is sent to userspace in the event of an unaligned access.
Looking further, SIGBUS with BUS_ADRALN is the proper pair of signal and
si_code to send for an unaligned access. That is what other
architectures do and what is required by POSIX.
Given that do_unaligned_access is broken in a way that no one can be
relying on, fix the openrisc code to just do the right thing.
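As an aside, a minimal userspace sketch (a hypothetical program using
only standard POSIX APIs) of what this fix makes reliable: an SA_SIGINFO
handler can now trust si_code and si_addr for unaligned accesses.

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static void bus_handler(int sig, siginfo_t *info, void *ctx)
    {
            /* write() is async-signal-safe; fprintf() is not */
            if (info->si_code == BUS_ADRALN) {
                    static const char msg[] = "unaligned access trapped\n";
                    write(STDERR_FILENO, msg, sizeof(msg) - 1);
            }
            _exit(1);
    }

    int main(void)
    {
            struct sigaction sa;

            memset(&sa, 0, sizeof(sa));
            sa.sa_sigaction = bus_handler;
            sa.sa_flags = SA_SIGINFO;
            sigaction(SIGBUS, &sa, NULL);
            /* ... code that may perform an unaligned access ... */
            return 0;
    }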
Cc: stable(a)vger.kernel.org
Fixes: 769a8a96229e ("OpenRISC: Traps")
Cc: Jonas Bonn <jonas(a)southpole.se>
Cc: Stefan Kristiansson <stefan.kristiansson(a)saunalahti.fi>
Cc: Stafford Horne <shorne(a)gmail.com>
Cc: Arnd Bergmann <arnd(a)arndb.de>
Cc: openrisc(a)lists.librecores.org
Signed-off-by: "Eric W. Biederman" <ebiederm(a)xmission.com>
---
arch/openrisc/kernel/traps.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/openrisc/kernel/traps.c b/arch/openrisc/kernel/traps.c
index 4085d72fa5ae..9e38dc66c9e4 100644
--- a/arch/openrisc/kernel/traps.c
+++ b/arch/openrisc/kernel/traps.c
@@ -266,12 +266,12 @@ asmlinkage void do_unaligned_access(struct pt_regs *regs, unsigned long address)
siginfo_t info;
if (user_mode(regs)) {
- /* Send a SIGSEGV */
- info.si_signo = SIGSEGV;
+ /* Send a SIGBUS */
+ info.si_signo = SIGBUS;
info.si_errno = 0;
- /* info.si_code has been set above */
- info.si_addr = (void *)address;
- force_sig_info(SIGSEGV, &info, current);
+ info.si_code = BUS_ADRALN;
+ info.si_addr = (void __user *)address;
+ force_sig_info(SIGBUS, &info, current);
} else {
printk("KERNEL: Unaligned Access 0x%.8lx\n", address);
show_registers(regs);
--
2.14.1
Starting from commit 041e4575f034 ("mtd: nand: handle ECC errors in
OOB"), nand_do_read_oob() (from the NAND core) returned either 0 or a
negative error code, and that is what the MTD layer expected.
However, the trend in the NAND layer is now to return either an error or
a positive number of bitflips; deciding which status to return to the
user belongs to the MTD layer.
Commit e47f68587b82 ("mtd: check for max_bitflips in mtd_read_oob()")
brought this logic to the mtd_read_oob() function, while the return
value coming from nand_do_read_oob() (called through the ->_read_oob()
hook) was left unchanged.
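For reference, a sketch of the conversion the MTD layer performs once it
receives a positive bitflip count (modelled on the logic commit
e47f68587b82 added to mtd_read_oob(); the exact code may differ):

    /* ret_code is what the ->_read_oob() hook returned */
    if (unlikely(ret_code < 0))
            return ret_code;        /* hard error, pass through */
    if (mtd->ecc_strength == 0)
            return 0;               /* device lacks ECC */
    /* too many corrected bitflips: tell the user to scrub the block */
    return ret_code >= mtd->bitflip_threshold ? -EUCLEAN : 0;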
Fixes: e47f68587b82 ("mtd: check for max_bitflips in mtd_read_oob()")
Cc: stable(a)vger.kernel.org
Signed-off-by: Miquel Raynal <miquel.raynal(a)free-electrons.com>
---
Changes since v2:
- Correctly return the maximum number of bitflips, not the number of
bitflips of the last chunk only.
Changes since v1:
- s/->ecc.read_oob() hook/->_read_oob() hook/ in the commit message
- Fixed the compilation issue
drivers/mtd/nand/nand_base.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/mtd/nand/nand_base.c b/drivers/mtd/nand/nand_base.c
index 469220065b8b..e4e39890a975 100644
--- a/drivers/mtd/nand/nand_base.c
+++ b/drivers/mtd/nand/nand_base.c
@@ -3794,6 +3794,7 @@ EXPORT_SYMBOL(nand_write_oob_syndrome);
static int nand_do_read_oob(struct mtd_info *mtd, loff_t from,
struct mtd_oob_ops *ops)
{
+ unsigned int max_bitflips = 0;
int page, realpage, chipnr;
struct nand_chip *chip = mtd_to_nand(mtd);
struct mtd_ecc_stats stats;
@@ -3855,6 +3856,8 @@ static int nand_do_read_oob(struct mtd_info *mtd, loff_t from,
if (!readlen)
break;
+ max_bitflips = max_t(unsigned int, max_bitflips, ret);
+
/* Increment page address */
realpage++;
@@ -3876,7 +3879,7 @@ static int nand_do_read_oob(struct mtd_info *mtd, loff_t from,
if (mtd->ecc_stats.failed - stats.failed)
return -EBADMSG;
- return mtd->ecc_stats.corrected - stats.corrected ? -EUCLEAN : 0;
+ return max_bitflips;
}
/**
--
2.11.0
The bounce buffer is gone from the MMC core, and now we have found out
that there are some (crippled) i.MX boards out there that have broken
ADMA (cannot do scatter-gather) and broken PIO, so they must use SDMA.
Closer examination shows a less significant slowdown also on SDMA-only
capable laptop hosts.
SDMA restricts the number of segments to one, so each segment gets
turned into a singular request that ping-pongs to the block layer
before the next request/segment is issued.
Apparently it happens a lot that the block layer sends requests that
include a lot of physically discontiguous segments. My guess is that
this phenomenon is coming from the file system.
Devices that cannot handle scatterlists in hardware can see major
benefits from a DMA-contiguous bounce buffer.
This patch accumulates those fragmented scatterlists in a physically
contiguous bounce buffer so that we can issue bigger DMA data chunks
to/from the card.
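Condensed, the round-trip the patch implements looks like this (a sketch
distilled from the diff below, with error handling omitted):

    /* write path: gather the fragmented scatterlist into the bounce
     * buffer, then hand ownership of the buffer to the device */
    sg_copy_to_buffer(data->sg, data->sg_len,
                      host->bounce_buffer, host->bounce_buffer_size);
    dma_sync_single_for_device(host->mmc->parent, host->bounce_addr,
                               host->bounce_buffer_size, DMA_TO_DEVICE);
    /* ... one big SDMA transfer to/from host->bounce_addr ... */
    /* read path: take the buffer back and scatter the data out again */
    dma_sync_single_for_cpu(host->mmc->parent, host->bounce_addr,
                            host->bounce_buffer_size, DMA_FROM_DEVICE);
    sg_copy_from_buffer(data->sg, data->sg_len,
                        host->bounce_buffer, host->bounce_buffer_size);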
When tested with this PCI-integrated host (1217:8221) that only
supports SDMA:
0b:00.0 SD Host controller: O2 Micro, Inc. OZ600FJ0/OZ900FJ0/OZ600FJS
SD/MMC Card Reader Controller (rev 05)
this patch gave ~1 MByte/s improved throughput on large reads and
writes when testing with iozone, compared to without the patch.
On the i.MX SDHCI controllers on the crippled i.MX 25 and i.MX 35,
the patch restores the performance to what it was before we removed
the bounce buffers, and then some: performance is better than ever
because we now allocate a bounce buffer the size of the maximum
single request the SDMA engine can handle. On the PCI laptop this
is 256K, whereas with the old bounce buffer code it was 64K max.
Cc: Benjamin Beckmeyer <beckmeyer.b(a)rittal.de>
Cc: Pierre Ossman <pierre(a)ossman.eu>
Cc: Benoît Thébaudeau <benoit(a)wsystem.com>
Cc: Fabio Estevam <fabio.estevam(a)nxp.com>
Cc: stable(a)vger.kernel.org
Fixes: de3ee99b097d ("mmc: Delete bounce buffer handling")
Signed-off-by: Linus Walleij <linus.walleij(a)linaro.org>
---
ChangeLog v3->v4:
- Cap the bounce buffer to 64KB instead of the biggest segment
as we experience diminishing returns with buffers > 64KB.
- Instead of using dma_alloc_coherent(), use good old devm_kmalloc()
and issue dma_sync_single_for*() to explicitly switch
ownership between CPU and the device. This way we exercise the
cache better and may consume less CPU.
- Bail out with single segments if we cannot allocate a bounce
buffer.
- Tested on the PCI SDHCI on my laptop: requesting a new test
on i.MX from Benjamin. (Please!)
ChangeLog v2->v3:
- Rewrite the commit message a bit
- Add Benjamin's Tested-by
- Add Fixes and stable tags
ChangeLog v1->v2:
- Skip the remapping and fiddling with the buffer, instead use
dma_alloc_coherent() and use a simple, coherent bounce buffer.
- Couple kernel messages to ->parent of the mmc_host as it relates
to the hardware characteristics.
---
drivers/mmc/host/sdhci.c | 125 ++++++++++++++++++++++++++++++++++++++++++++---
drivers/mmc/host/sdhci.h | 3 ++
2 files changed, 120 insertions(+), 8 deletions(-)
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index e9290a3439d5..694a320d9444 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -21,6 +21,7 @@
#include <linux/dma-mapping.h>
#include <linux/slab.h>
#include <linux/scatterlist.h>
+#include <linux/sizes.h>
#include <linux/swiotlb.h>
#include <linux/regulator/consumer.h>
#include <linux/pm_runtime.h>
@@ -502,8 +503,27 @@ static int sdhci_pre_dma_transfer(struct sdhci_host *host,
if (data->host_cookie == COOKIE_PRE_MAPPED)
return data->sg_count;
- sg_count = dma_map_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
- mmc_get_dma_dir(data));
+ /* Bounce write requests to the bounce buffer */
+ if (host->bounce_buffer) {
+ if (mmc_get_dma_dir(data) == DMA_TO_DEVICE) {
+ /* Copy the data to the bounce buffer */
+ sg_copy_to_buffer(data->sg, data->sg_len,
+ host->bounce_buffer,
+ host->bounce_buffer_size);
+ }
+ /* Switch ownership to the DMA */
+ dma_sync_single_for_device(host->mmc->parent,
+ host->bounce_addr,
+ host->bounce_buffer_size,
+ DMA_TO_DEVICE);
+ /* Just a dummy value */
+ sg_count = 1;
+ } else {
+ /* Just access the data directly from memory */
+ sg_count = dma_map_sg(mmc_dev(host->mmc),
+ data->sg, data->sg_len,
+ mmc_get_dma_dir(data));
+ }
if (sg_count == 0)
return -ENOSPC;
@@ -858,8 +878,13 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
SDHCI_ADMA_ADDRESS_HI);
} else {
WARN_ON(sg_cnt != 1);
- sdhci_writel(host, sg_dma_address(data->sg),
- SDHCI_DMA_ADDRESS);
+ /* Bounce buffer goes to work */
+ if (host->bounce_buffer)
+ sdhci_writel(host, host->bounce_addr,
+ SDHCI_DMA_ADDRESS);
+ else
+ sdhci_writel(host, sg_dma_address(data->sg),
+ SDHCI_DMA_ADDRESS);
}
}
@@ -2248,7 +2273,12 @@ static void sdhci_pre_req(struct mmc_host *mmc, struct mmc_request *mrq)
mrq->data->host_cookie = COOKIE_UNMAPPED;
- if (host->flags & SDHCI_REQ_USE_DMA)
+ /*
+ * No pre-mapping in the pre hook if we're using the bounce buffer,
+ * for that we would need two bounce buffers since one buffer is
+ * in flight when this is getting called.
+ */
+ if (host->flags & SDHCI_REQ_USE_DMA && !host->bounce_buffer)
sdhci_pre_dma_transfer(host, mrq->data, COOKIE_PRE_MAPPED);
}
@@ -2352,8 +2382,28 @@ static bool sdhci_request_done(struct sdhci_host *host)
struct mmc_data *data = mrq->data;
if (data && data->host_cookie == COOKIE_MAPPED) {
- dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
- mmc_get_dma_dir(data));
+ if (host->bounce_buffer) {
+ /*
+ * On reads, copy the bounced data into the
+ * sglist
+ */
+ if (mmc_get_dma_dir(data) == DMA_FROM_DEVICE) {
+ dma_sync_single_for_cpu(
+ host->mmc->parent,
+ host->bounce_addr,
+ host->bounce_buffer_size,
+ DMA_FROM_DEVICE);
+ sg_copy_from_buffer(data->sg,
+ data->sg_len,
+ host->bounce_buffer,
+ host->bounce_buffer_size);
+ }
+ } else {
+ /* Unmap the raw data */
+ dma_unmap_sg(mmc_dev(host->mmc), data->sg,
+ data->sg_len,
+ mmc_get_dma_dir(data));
+ }
data->host_cookie = COOKIE_UNMAPPED;
}
}
@@ -2636,7 +2686,12 @@ static void sdhci_data_irq(struct sdhci_host *host, u32 intmask)
*/
if (intmask & SDHCI_INT_DMA_END) {
u32 dmastart, dmanow;
- dmastart = sg_dma_address(host->data->sg);
+
+ if (host->bounce_buffer)
+ dmastart = host->bounce_addr;
+ else
+ dmastart = sg_dma_address(host->data->sg);
+
dmanow = dmastart + host->data->bytes_xfered;
/*
* Force update to the next DMA block boundary.
@@ -3713,6 +3768,60 @@ int sdhci_setup_host(struct sdhci_host *host)
*/
mmc->max_blk_count = (host->quirks & SDHCI_QUIRK_NO_MULTIBLOCK) ? 1 : 65535;
+ if (mmc->max_segs == 1) {
+ unsigned int max_blocks;
+ unsigned int max_seg_size;
+
+ /*
+ * Cap the bounce buffer at 64KB. Using a bigger bounce buffer
+ * has diminishing returns, this is probably because SD/MMC
+ * cards are usually optimized to handle this size of requests.
+ */
+ max_seg_size = SZ_64K;
+ if (mmc->max_req_size < max_seg_size)
+ max_seg_size = mmc->max_req_size;
+ max_blocks = max_seg_size / 512;
+ dev_info(mmc->parent,
+ "host only supports SDMA, activate bounce buffer\n");
+
+ /*
+ * When we just support one segment, we can get significant
+ * speedups by the help of a bounce buffer to group scattered
+ * reads/writes together.
+ */
+ host->bounce_buffer = devm_kmalloc(mmc->parent,
+ max_seg_size,
+ GFP_KERNEL);
+ if (!host->bounce_buffer) {
+ dev_err(mmc->parent,
+ "failed to allocate %u bytes for bounce buffer, falling back to single segments\n",
+ max_seg_size);
+ /*
+ * Exiting with zero here makes sure we proceed with
+ * mmc->max_segs == 1.
+ */
+ return 0;
+ }
+
+ host->bounce_buffer_size = max_seg_size;
+ host->bounce_addr = dma_map_single(mmc->parent,
+ host->bounce_buffer,
+ host->bounce_buffer_size,
+ DMA_BIDIRECTIONAL);
+ ret = dma_mapping_error(mmc->parent, host->bounce_addr);
+ if (ret)
+ /* Again fall back to max_segs == 1 */
+ return 0;
+
+ /* Lie about this since we're bouncing */
+ mmc->max_segs = max_blocks;
+ mmc->max_seg_size = max_seg_size;
+
+ dev_info(mmc->parent,
+ "bounce buffer: bounce up to %u segments into one, max segment size %u bytes\n",
+ max_blocks, max_seg_size);
+ }
+
return 0;
unreg:
diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
index 54bc444c317f..865e09618d22 100644
--- a/drivers/mmc/host/sdhci.h
+++ b/drivers/mmc/host/sdhci.h
@@ -440,6 +440,9 @@ struct sdhci_host {
int irq; /* Device IRQ */
void __iomem *ioaddr; /* Mapped address */
+ char *bounce_buffer; /* For packing SDMA reads/writes */
+ dma_addr_t bounce_addr;
+ size_t bounce_buffer_size;
const struct sdhci_ops *ops; /* Low level hw interface */
--
2.14.3
From: Punit Agrawal <punit.agrawal(a)arm.com>
KVM only supports PMD hugepages at stage 2 but doesn't actually check
that the provided hugepage memory pagesize is PMD_SIZE before populating
stage 2 entries.
In cases where the backing hugepage size is smaller than PMD_SIZE (such
as when using contiguous hugepages), KVM can end up creating stage 2
mappings that extend beyond the supplied memory.
Fix this by checking the pagesize of the userspace VMA before creating
a PMD hugepage at stage 2.
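To illustrate why is_vm_hugetlb_page() alone is not enough (sizes below
assume arm64 with a 4K page granule):

    /* is_vm_hugetlb_page(vma) is true for every hugetlb VMA, including
     * contiguous-PTE hugepages (CONT_PTE_SIZE = 64 KiB with 4K pages),
     * but the only stage 2 hugepage KVM supports is PMD_SIZE (2 MiB).
     * Checking the actual backing pagesize prevents installing a 2 MiB
     * stage 2 block over a 64 KiB backing page, which would map memory
     * beyond what userspace supplied. */
    if (vma_kernel_pagesize(vma) == PMD_SIZE && !logging_active)
            hugetlb = true;         /* backing page really is PMD-sized */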
Fixes: 66b3923a1a0f77a ("arm64: hugetlb: add support for PTE contiguous bit")
Signed-off-by: Punit Agrawal <punit.agrawal(a)arm.com>
Cc: Marc Zyngier <marc.zyngier(a)arm.com>
Cc: <stable(a)vger.kernel.org> # v4.5+
Reviewed-by: Christoffer Dall <christoffer.dall(a)linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall(a)linaro.org>
---
virt/kvm/arm/mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index b4b69c2d1012..9dea96380339 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1310,7 +1310,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
return -EFAULT;
}
- if (is_vm_hugetlb_page(vma) && !logging_active) {
+ if (vma_kernel_pagesize(vma) == PMD_SIZE && !logging_active) {
hugetlb = true;
gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
} else {
--
2.14.2
Starting from commit 041e4575f034 ("mtd: nand: handle ECC errors in
OOB"), nand_do_read_oob() (from the NAND core) returned either 0 or a
negative error code, and that is what the MTD layer expected.
However, the trend in the NAND layer is now to return either an error or
a positive number of bitflips; deciding which status to return to the
user belongs to the MTD layer.
Commit e47f68587b82 ("mtd: check for max_bitflips in mtd_read_oob()")
brought this logic to the mtd_read_oob() function, while the return
value coming from nand_do_read_oob() (called through the ->_read_oob()
hook) was left unchanged.
Fixes: e47f68587b82 ("mtd: check for max_bitflips in mtd_read_oob()")
Cc: stable(a)vger.kernel.org
Signed-off-by: Miquel Raynal <miquel.raynal(a)free-electrons.com>
---
Changes since v1:
- s/->ecc.read_oob() hook/->_read_oob() hook/ in the commit message
- Fixed the compilation issue
drivers/mtd/nand/nand_base.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/mtd/nand/nand_base.c b/drivers/mtd/nand/nand_base.c
index 469220065b8b..440d9f5d5b17 100644
--- a/drivers/mtd/nand/nand_base.c
+++ b/drivers/mtd/nand/nand_base.c
@@ -3876,7 +3876,7 @@ static int nand_do_read_oob(struct mtd_info *mtd, loff_t from,
if (mtd->ecc_stats.failed - stats.failed)
return -EBADMSG;
- return mtd->ecc_stats.corrected - stats.corrected ? -EUCLEAN : 0;
+ return ret;
}
/**
--
2.11.0
On Fri, 2018-01-12 at 08:23 +0200, Leon Romanovsky wrote:
> On Thu, Jan 11, 2018 at 07:07:06PM +0000, Bart Van Assche wrote:
> > On Thu, 2018-01-11 at 21:00 +0200, Leon Romanovsky wrote:
> > > On Thu, Jan 11, 2018 at 04:02:33PM +0000, Bart Van Assche wrote:
> > > > On Thu, 2018-01-11 at 08:22 +0200, Leon Romanovsky wrote:
> > > > > The proposed patch definitely decreases the chance of races, but it is not fixing them.
> > > > > There is a chance to have change in qp state immediately after your "if ..." check.
> > > >
> > > > Hello Leon,
> > > >
> > > > Please have a look at rxe_qp_error() and you will see that the patch I posted
> > > > is a proper fix. In the scenario you described rxe_qp_error() will trigger a
> > > > run of rxe_completer().
> > >
> > > Bart,
> > >
> > > What am I missing?
> > >
> > > CPU1 CPU2
> > > if (unlikely....
> > > <---
> > > /* move the qp to the error state */
> > > void rxe_qp_error(struct rxe_qp *qp)
> > > {
> > > qp->req.state = QP_STATE_ERROR;
> > > qp->resp.state = QP_STATE_ERROR;
> > > qp->attr.qp_state = IB_QPS_ERR;
> > > --->
> > > rxe_run_task(&qp->req.task, must_sched);
> > >
> > >
> > >
> > > It is more or less the same as without "if (unlikely..."
> >
> > Hello Leon,
> >
> > In the above the part of rxe_qp_error() that I was referring to in my e-mail
> > is missing:
> >
> > if (qp_type(qp) == IB_QPT_RC)
> > rxe_run_task(&qp->comp.task, 1);
>
>
> But it is exactly where race exists, as long QP isn't protected, it can
> switch CPUs and create race.
Hello Leon,
Can you clarify which race you are referring to? rxe_run_task() uses the
tasklet mechanism and tasklets are guaranteed to run on at most one CPU at a
time. See the "Top and Bottom Halves" chapter in Linux Device Drivers,
3rd edition, and the tasklet_schedule() implementation in
<linux/interrupt.h> and kernel/softirq.c.
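(For illustration, a minimal sketch of that serialization guarantee
using the classic tasklet API of this era; the names are made up:)

    #include <linux/interrupt.h>

    static void my_work(unsigned long data)
    {
            /* Runs in softirq context. The tasklet core guarantees this
             * function is never running on two CPUs at the same time. */
    }

    static DECLARE_TASKLET(my_tasklet, my_work, 0);

    static void kick(void)
    {
            /* Any CPU may call this; if the tasklet is already scheduled
             * the call is a no-op, otherwise the tasklet is queued to run
             * once. Concurrent callers on different CPUs still yield at
             * most one running instance. */
            tasklet_schedule(&my_tasklet);
    }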
Thanks,
Bart.
Hi Arnd,
On Thursday 11 January 2018 11:46 PM, Eric Anholt wrote:
> Arnd Bergmann <arnd(a)arndb.de> writes:
>
>> On Thu, Jan 11, 2018 at 2:30 PM, Kishon Vijay Abraham I <kishon(a)ti.com> wrote:
>>> On Thursday 11 January 2018 02:27 AM, Arnd Bergmann wrote:
>>>> On Mon, Jan 8, 2018 at 7:32 PM, Kishon Vijay Abraham I <kishon(a)ti.com> wrote:
>>>>> On Monday 08 January 2018 06:31 PM, Arnd Bergmann wrote:
>>>>>> Stefan Wahren reports a problem with a warning fix that was merged
>>>>>> ---
>>>>>> This obviously needs to be tested, I wrote this up as a reply to
>>>>>> Stefan's bug report. I'm fairly sure that I covered all usb-phy
>>>>>> driver strings here. My goal is to have a fix merged into 4.15
>>>>>> rather than reverting all the DT fixes.
>>>>>
>>>>> Shouldn't the fix be in phy consumer drivers to not return error if it's able
>>>>> to find the phy either using usb-phy or generic phy?
>>>>
>>>> Stefan has posted a patch to that effect now, but I fear that might be
>>>> a little fragile, in particular this short before the release with the
>>>> regression
>>>> in place.
>>>>
>>>> The main problem is that we'd have to change the generic
>>>> usb_add_hcd() function in addition to dwc2 and dwc3 to ignore
>>>> -EPROBE_DEFER from phy_get() whenever usb_get_phy_dev()
>>>> has already succeeded.
>>>>
>>>> If there is any HCD that relies on usb_add_hcd() to get both the
>>>> usb_phy and the phy structures, and it may need to defer probing
>>>> when the latter one isn't ready yet, that fix would break another
>>>> driver.
>>>
>>> hmm.. IMO the better thing right now would be to revert the dt patch which adds
>>> #phy-cells.
>>> We have to see if there are better fixes in order to add #phy-cells warning fix
>>> in stable tree.
>>
>> Let's see which patches that would be, I think this is the full list of
>> nodes that got an extra #phy-cells:
>>
>> c22fe696157d ARM: dts: Fix dm814x missing phy-cells property
>> f0e11ff8ff65 ARM: dts: am33xx: Add missing #phy-cells to ti,am335x-usb-phy
>> c5bbf358b790 arm: dts: nspire: Add missing #phy-cells to usb-nop-xceiv
>> 44e5dced2ef6 arm: dts: marvell: Add missing #phy-cells to usb-nop-xceiv
>> 014d6da6cb25 ARM: dts: bcm283x: Fix DTC warnings about missing phy-cells
>> f568f6f554b8 ARM: dts: omap: Add missing #phy-cells to usb-nop-xceiv
>>
>> plus a couple in linux-next:
>>
>> d745d5f277bf ARM: dts: imx51-zii-rdu1: Add missing #phy-cells to usb-nop-xceiv
>> 915fbe59cbf2 ARM: dts: imx: Add missing #phy-cells to usb-nop-xceiv
>>
>> It's a lot of patches to revert, and I guess it would get us back to hundreds
>> of warnings in an allmodconfig build, so I'd first try to come up with
>> ways to prove that at least some of them can stay.
>>
>> Almost all the warnings are about "usb-nop-xceiv" phys, the only exceptions
>> I could find are the OMAP ones (the first two patches), which use
>> "ti,am335x-usb-phy" and are referenced from a "ti,musb-am33xx". That
>> particular driver is not affected by the bug, so we can leave that in.
>>
>> To deal with all the "usb-nop-xceiv" references including the one that
>> Stefan reported, we could use a much simpler version of my earlier
>> patch, do you think this is any better?
yeah, this looks simpler.
>>
>> Signed-off-by: Arnd Bergmann <arnd(a)arndb.de>
In case you want to take this patch yourself
Acked-by: Kishon Vijay Abraham I <kishon(a)ti.com>
(or let me know if I have to create a separate pull request for Greg)
Thanks
Kishon
>>
>> diff --git a/drivers/phy/phy-core.c b/drivers/phy/phy-core.c
>> index b4964b067aec..f056d8fb3921 100644
>> --- a/drivers/phy/phy-core.c
>> +++ b/drivers/phy/phy-core.c
>> @@ -410,6 +410,10 @@ static struct phy *_of_phy_get(struct device_node
>> *np, int index)
>> if (ret)
>> return ERR_PTR(-ENODEV);
>>
>> + /* This phy type is handled by the usb-phy subsystem for now */
>> + if (of_device_is_compatible(np, "usb-nop-xceiv"))
>> + return ERR_PTR(-ENODEV);
>> +
>> mutex_lock(&phy_provider_mutex);
>> phy_provider = of_phy_provider_lookup(args.np);
>> if (IS_ERR(phy_provider) || !try_module_get(phy_provider->owner)) {
>
> This seems like a nice workaround!
>