From: Hans Verkuil hverkuil@xs4all.nl
[ Upstream commit 7cf7b2e977abf3f992036939e35a8eab60013aff ]
Lower the minimum height to 360 to be consistent with the webcam input of vivid.
The previous minimum of 480 was rather arbitrary, and it made it harder to use vivid as a source for encoding, since the default resolution when you load vivid is 640x360.
Signed-off-by: Hans Verkuil hans.verkuil@cisco.com Reviewed-by: Kieran Bingham kieran.bingham+renesas@ideasonboard.com Signed-off-by: Hans Verkuil hverkuil@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+samsung@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/vicodec/vicodec-core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/media/platform/vicodec/vicodec-core.c b/drivers/media/platform/vicodec/vicodec-core.c index 408cd55d3580..daa5caa6adc6 100644 --- a/drivers/media/platform/vicodec/vicodec-core.c +++ b/drivers/media/platform/vicodec/vicodec-core.c @@ -42,7 +42,7 @@ MODULE_PARM_DESC(debug, " activates debug info"); #define MAX_WIDTH 4096U #define MIN_WIDTH 640U #define MAX_HEIGHT 2160U -#define MIN_HEIGHT 480U +#define MIN_HEIGHT 360U
#define dprintk(dev, fmt, arg...) \ v4l2_dbg(1, debug, &dev->v4l2_dev, "%s: " fmt, __func__, ## arg)
From: Hans Verkuil hverkuil@xs4all.nl
[ Upstream commit 55623b4169056d7bb493d1c6f715991f8db67302 ]
During the configuration phase of a CEC adapter, it tries to claim a free logical address by polling.
However, the code doesn't check for errors other than OK or NACK; those are just treated as if the poll was NACKed.
Instead, check for such errors and retry the poll. If the problem persists, then don't claim this LA, since there is something weird going on.
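For illustration, here is a minimal, self-contained sketch of the bounded-retry pattern described above. poll_once() and the status values are stand-ins, not the kernel's CEC API; the real code uses cec_transmit_msg_fh() and the CEC_TX_STATUS_* flags shown in the diff below.

#include <stdbool.h>
#include <stdio.h>

enum tx_status { TX_OK, TX_NACK, TX_ABORTED, TX_ERROR };

/* Stand-in for one poll transmission of the target logical address. */
static enum tx_status poll_once(void)
{
	return TX_ERROR;	/* pretend the bus keeps losing arbitration */
}

/* Returns true when the logical address may be claimed. */
static bool try_claim_logical_address(void)
{
	const unsigned int max_retries = 2;
	unsigned int i;

	for (i = 0; i < max_retries; i++) {
		switch (poll_once()) {
		case TX_OK:
			return false;	/* poll was ACKed: address is in use */
		case TX_NACK:
			return true;	/* nobody answered: address is free */
		case TX_ABORTED:
			return false;	/* adapter went away: bail out */
		case TX_ERROR:
			continue;	/* e.g. Lost Arbitration: retry */
		}
	}

	/* No OK/NACK after max_retries: something odd is going on, so
	 * play it safe and don't claim this logical address either. */
	return false;
}

int main(void)
{
	printf("claim LA: %s\n", try_claim_logical_address() ? "yes" : "no");
	return 0;
}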
Signed-off-by: Hans Verkuil hans.verkuil@cisco.com Signed-off-by: Hans Verkuil hverkuil@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+samsung@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/cec/cec-adap.c | 47 ++++++++++++++++++++++++++++-------- 1 file changed, 37 insertions(+), 10 deletions(-)
diff --git a/drivers/media/cec/cec-adap.c b/drivers/media/cec/cec-adap.c index dd8bad74a1f0..a537e518384b 100644 --- a/drivers/media/cec/cec-adap.c +++ b/drivers/media/cec/cec-adap.c @@ -1167,6 +1167,8 @@ static int cec_config_log_addr(struct cec_adapter *adap, { struct cec_log_addrs *las = &adap->log_addrs; struct cec_msg msg = { }; + const unsigned int max_retries = 2; + unsigned int i; int err;
if (cec_has_log_addr(adap, log_addr)) @@ -1175,19 +1177,44 @@ static int cec_config_log_addr(struct cec_adapter *adap, /* Send poll message */ msg.len = 1; msg.msg[0] = (log_addr << 4) | log_addr; - err = cec_transmit_msg_fh(adap, &msg, NULL, true);
- /* - * While trying to poll the physical address was reset - * and the adapter was unconfigured, so bail out. - */ - if (!adap->is_configuring) - return -EINTR; + for (i = 0; i < max_retries; i++) { + err = cec_transmit_msg_fh(adap, &msg, NULL, true);
- if (err) - return err; + /* + * While trying to poll the physical address was reset + * and the adapter was unconfigured, so bail out. + */ + if (!adap->is_configuring) + return -EINTR; + + if (err) + return err;
- if (msg.tx_status & CEC_TX_STATUS_OK) + /* + * The message was aborted due to a disconnect or + * unconfigure, just bail out. + */ + if (msg.tx_status & CEC_TX_STATUS_ABORTED) + return -EINTR; + if (msg.tx_status & CEC_TX_STATUS_OK) + return 0; + if (msg.tx_status & CEC_TX_STATUS_NACK) + break; + /* + * Retry up to max_retries times if the message was neither + * OKed or NACKed. This can happen due to e.g. a Lost + * Arbitration condition. + */ + } + + /* + * If we are unable to get an OK or a NACK after max_retries attempts + * (and note that each attempt already consists of four polls), then + * then we assume that something is really weird and that it is not a + * good idea to try and claim this logical address. + */ + if (i == max_retries) return 0;
/*
From: Sakari Ailus sakari.ailus@linux.intel.com
[ Upstream commit 30efae3d789cd0714ef795545a46749236e29558 ]
Although there are still issues related to object lifetime management, unregister the media device first when the driver is being unbound. This is slightly safer.
Signed-off-by: Sakari Ailus sakari.ailus@linux.intel.com Reviewed-by: Laurent Pinchart laurent.pinchart@ideasonboard.com Signed-off-by: Mauro Carvalho Chehab mchehab+samsung@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/omap3isp/isp.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c index 842e2235047d..432bc7fbedc9 100644 --- a/drivers/media/platform/omap3isp/isp.c +++ b/drivers/media/platform/omap3isp/isp.c @@ -1587,6 +1587,8 @@ static void isp_pm_complete(struct device *dev)
static void isp_unregister_entities(struct isp_device *isp) { + media_device_unregister(&isp->media_dev); + omap3isp_csi2_unregister_entities(&isp->isp_csi2a); omap3isp_ccp2_unregister_entities(&isp->isp_ccp2); omap3isp_ccdc_unregister_entities(&isp->isp_ccdc); @@ -1597,7 +1599,6 @@ static void isp_unregister_entities(struct isp_device *isp) omap3isp_stat_unregister_entities(&isp->isp_hist);
v4l2_device_unregister(&isp->v4l2_dev); - media_device_unregister(&isp->media_dev); media_device_cleanup(&isp->media_dev); }
From: Sakari Ailus sakari.ailus@linux.intel.com
[ Upstream commit 32388d6ef7cffc7d8291b67f8dfa26acd45217fd ]
Although there are still issues related to object lifetime management, unregister the media device first, followed immediately by the other device nodes, when the driver is being unbound. Only then may the resources needed by the driver be released. This is slightly safer.
Signed-off-by: Sakari Ailus sakari.ailus@linux.intel.com Tested-by: Bingbu Cao bingbu.cao@intel.com Reviewed-by: Bingbu Cao bingbu.cao@intel.com Signed-off-by: Mauro Carvalho Chehab mchehab+samsung@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/pci/intel/ipu3/ipu3-cio2.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/media/pci/intel/ipu3/ipu3-cio2.c b/drivers/media/pci/intel/ipu3/ipu3-cio2.c index 29027159eced..ca1a4d8e972e 100644 --- a/drivers/media/pci/intel/ipu3/ipu3-cio2.c +++ b/drivers/media/pci/intel/ipu3/ipu3-cio2.c @@ -1846,12 +1846,12 @@ static void cio2_pci_remove(struct pci_dev *pci_dev) struct cio2_device *cio2 = pci_get_drvdata(pci_dev); unsigned int i;
+ media_device_unregister(&cio2->media_dev); cio2_notifier_exit(cio2); - cio2_fbpt_exit_dummy(cio2); for (i = 0; i < CIO2_QUEUES; i++) cio2_queue_exit(cio2, &cio2->queue[i]); + cio2_fbpt_exit_dummy(cio2); v4l2_device_unregister(&cio2->v4l2_dev); - media_device_unregister(&cio2->media_dev); media_device_cleanup(&cio2->media_dev); mutex_destroy(&cio2->lock); }
From: Lu Baolu baolu.lu@linux.intel.com
[ Upstream commit 19ed3e2dd8549c1a34914e8dad01b64e7837645a ]
When handling a page request event without a PASID, go to the "no_pasid" branch instead of "bad_req". Otherwise, a NULL pointer dereference will happen there.
Cc: Ashok Raj ashok.raj@intel.com Cc: Jacob Pan jacob.jun.pan@linux.intel.com Cc: Sohil Mehta sohil.mehta@intel.com Signed-off-by: Lu Baolu baolu.lu@linux.intel.com Fixes: a222a7f0bb6c9 'iommu/vt-d: Implement page request handling' Signed-off-by: Joerg Roedel jroedel@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/iommu/intel-svm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu/intel-svm.c index 4a03e5090952..188f4eaed6e5 100644 --- a/drivers/iommu/intel-svm.c +++ b/drivers/iommu/intel-svm.c @@ -596,7 +596,7 @@ static irqreturn_t prq_event_thread(int irq, void *d) pr_err("%s: Page request without PASID: %08llx %08llx\n", iommu->name, ((unsigned long long *)req)[0], ((unsigned long long *)req)[1]); - goto bad_req; + goto no_pasid; }
if (!svm || svm->pasid != req->pasid) {
From: Rafał Miłecki rafal@milecki.pl
[ Upstream commit 3401d42c7ea2d064d15c66698ff8eb96553179ce ]
The previous commit adding support for 160 MHz chanspecs was incomplete. It didn't set the bandwidth info and didn't extract the control channel info. As a result it was also using the uninitialized "sb" variable.
This change has been tested for two chanspecs found to be reported by some devices/firmwares:
1) 60/160 (0xee32)
   Before: chnum:50 control_ch_num:36
   After:  chnum:50 control_ch_num:60
2) 120/160 (0xed72)
   Before: chnum:114 control_ch_num:100
   After:  chnum:114 control_ch_num:120
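As background, the bandwidth and sideband are plain bit fields inside the 16-bit chanspec, and the fix simply extracts the missing ones. A generic sketch of that mask/shift extraction follows; the field positions used here are made up for illustration and are not Broadcom's actual d11ac encoding, only the brcmu_maskget16() idea is taken from the diff below.

#include <stdint.h>
#include <stdio.h>

/* Extract a bit field: mask selects the bits, shift right-aligns them.
 * This mirrors what a helper such as brcmu_maskget16() does. */
static uint16_t maskget16(uint16_t value, uint16_t mask, uint16_t shift)
{
	return (uint16_t)((value & mask) >> shift);
}

int main(void)
{
	uint16_t chspec = 0xee32;	/* one of the chanspecs from the test above */

	/* Hypothetical layout: bits 2:0 = sideband, bits 6:4 = bandwidth. */
	printf("sb = %u\n", (unsigned int)maskget16(chspec, 0x0007, 0));
	printf("bw = %u\n", (unsigned int)maskget16(chspec, 0x0070, 4));
	return 0;
}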
Fixes: 330994e8e8ec ("brcmfmac: fix for proper support of 160MHz bandwidth") Signed-off-by: Rafał Miłecki rafal@milecki.pl Signed-off-by: Kalle Valo kvalo@codeaurora.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c b/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c index e7584b842dce..eb5db94f5745 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c @@ -193,6 +193,9 @@ static void brcmu_d11ac_decchspec(struct brcmu_chan *ch) } break; case BRCMU_CHSPEC_D11AC_BW_160: + ch->bw = BRCMU_CHAN_BW_160; + ch->sb = brcmu_maskget16(ch->chspec, BRCMU_CHSPEC_D11AC_SB_MASK, + BRCMU_CHSPEC_D11AC_SB_SHIFT); switch (ch->sb) { case BRCMU_CHAN_SB_LLL: ch->control_ch_num -= CH_70MHZ_APART;
Sasha Levin sashal@kernel.org writes:
From: Rafał Miłecki rafal@milecki.pl
[ Upstream commit 3401d42c7ea2d064d15c66698ff8eb96553179ce ]
Previous commit /adding/ support for 160 MHz chanspecs was incomplete. It didn't set bandwidth info and didn't extract control channel info. As the result it was also using uninitialized "sb" var.
This change has been tested for two chanspecs found to be reported by some devices/firmwares:
- 60/160 (0xee32) Before: chnum:50 control_ch_num:36 After: chnum:50 control_ch_num:60
- 120/160 (0xed72) Before: chnum:114 control_ch_num:100 After: chnum:114 control_ch_num:120
Fixes: 330994e8e8ec ("brcmfmac: fix for proper support of 160MHz bandwidth") Signed-off-by: Rafał Miłecki rafal@milecki.pl Signed-off-by: Kalle Valo kvalo@codeaurora.org Signed-off-by: Sasha Levin sashal@kernel.org
Please mark your stable patches with:
X-Patchwork-Hint: ignore
This way patchwork ignores them and this saves my time. Thank you.
On Thu, Nov 29, 2018 at 01:49:43PM +0200, Kalle Valo wrote:
Sasha Levin sashal@kernel.org writes:
[...]
Please mark your stable patches with:
X-Patchwork-Hint: ignore
This way patchwork ignores them and this saves my time. Thank you.
Will do, thanks!
-- Thanks, Sasha
From: Arnd Bergmann arnd@arndb.de
[ Upstream commit b374e8686fc35ae124e62dc78725ea656ba1ef8a ]
When CONFIG_LEDS_CLASS is disabled, or it is a loadable module while mt76 is built-in, we run into a link error:
drivers/net/wireless/mediatek/mt76/mac80211.o: In function `mt76_register_device':
mac80211.c:(.text+0xb78): relocation truncated to fit: R_AARCH64_CALL26 against undefined symbol `devm_of_led_classdev_register'
We don't really need a hard dependency here, as the driver can presumably work just fine without LEDs, so follow the iwlwifi example and add a separate Kconfig option for the LED support. It is enabled whenever the LED code can actually be linked in, and otherwise the respective code is left out of the driver object.
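The pattern is a common one: a bool Kconfig symbol that is only true when the LED class can actually be linked against, plus IS_ENABLED() guards in the C code so the dead branch is discarded at compile time. A rough, self-contained sketch of the C side; the macro below is merely a stand-in for the kernel's IS_ENABLED(CONFIG_MT76_LEDS).

#include <stdio.h>

/* Stand-in for IS_ENABLED(CONFIG_MT76_LEDS): in the kernel this expands
 * to a compile-time 0 or 1 depending on the Kconfig symbol. */
#define MT76_LEDS_ENABLED 0

static int led_init(void)
{
	/* With the guard below being a constant 0, this call is optimized
	 * out; in the kernel that is what avoids referencing an undefined
	 * LED-class symbol at link time. */
	return 0;
}

static int register_device(void)
{
	int ret;

	if (MT76_LEDS_ENABLED) {	/* folded away when the symbol is 0 */
		ret = led_init();
		if (ret)
			return ret;
	}
	return 0;	/* the rest of device registration */
}

int main(void)
{
	printf("register_device() = %d\n", register_device());
	return 0;
}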
Fixes: 17f1de56df05 ("mt76: add common code shared between multiple chipsets") Signed-off-by: Arnd Bergmann arnd@arndb.de Signed-off-by: Lorenzo Bianconi lorenzo.bianconi@redhat.com Signed-off-by: Kalle Valo kvalo@codeaurora.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/mediatek/mt76/Kconfig | 6 ++++++ drivers/net/wireless/mediatek/mt76/mac80211.c | 8 +++++--- drivers/net/wireless/mediatek/mt76/mt76x2_init.c | 6 ++++-- 3 files changed, 15 insertions(+), 5 deletions(-)
diff --git a/drivers/net/wireless/mediatek/mt76/Kconfig b/drivers/net/wireless/mediatek/mt76/Kconfig index b6c5f17dca30..27826217ff76 100644 --- a/drivers/net/wireless/mediatek/mt76/Kconfig +++ b/drivers/net/wireless/mediatek/mt76/Kconfig @@ -1,6 +1,12 @@ config MT76_CORE tristate
+config MT76_LEDS + bool + depends on MT76_CORE + depends on LEDS_CLASS=y || MT76_CORE=LEDS_CLASS + default y + config MT76_USB tristate depends on MT76_CORE diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c index 029d54bce9e8..ade4a2029a24 100644 --- a/drivers/net/wireless/mediatek/mt76/mac80211.c +++ b/drivers/net/wireless/mediatek/mt76/mac80211.c @@ -342,9 +342,11 @@ int mt76_register_device(struct mt76_dev *dev, bool vht, mt76_check_sband(dev, NL80211_BAND_2GHZ); mt76_check_sband(dev, NL80211_BAND_5GHZ);
- ret = mt76_led_init(dev); - if (ret) - return ret; + if (IS_ENABLED(CONFIG_MT76_LEDS)) { + ret = mt76_led_init(dev); + if (ret) + return ret; + }
return ieee80211_register_hw(hw); } diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_init.c b/drivers/net/wireless/mediatek/mt76/mt76x2_init.c index b814391f79ac..03b103c45d69 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x2_init.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x2_init.c @@ -581,8 +581,10 @@ int mt76x2_register_device(struct mt76x2_dev *dev) mt76x2_dfs_init_detector(dev);
/* init led callbacks */ - dev->mt76.led_cdev.brightness_set = mt76x2_led_set_brightness; - dev->mt76.led_cdev.blink_set = mt76x2_led_set_blink; + if (IS_ENABLED(CONFIG_MT76_LEDS)) { + dev->mt76.led_cdev.brightness_set = mt76x2_led_set_brightness; + dev->mt76.led_cdev.blink_set = mt76x2_led_set_blink; + }
ret = mt76_register_device(&dev->mt76, true, mt76x2_rates, ARRAY_SIZE(mt76x2_rates));
From: Geert Uytterhoeven geert+renesas@glider.be
[ Upstream commit e5b78f2e349eef5d4fca5dc1cf5a3b4b2cc27abd ]
If iommu_ops.add_device() fails, iommu_ops.domain_free() is still called, leading to a crash, as the domain was only partially initialized:
ipmmu-vmsa e67b0000.mmu: Cannot accommodate DMA translation for IOMMU page tables
sata_rcar ee300000.sata: Unable to initialize IPMMU context
iommu: Failed to add device ee300000.sata to group 0: -22
Unable to handle kernel NULL pointer dereference at virtual address 0000000000000038
...
Call trace:
 ipmmu_domain_free+0x1c/0xa0
 iommu_group_release+0x48/0x68
 kobject_put+0x74/0xe8
 kobject_del.part.0+0x3c/0x50
 kobject_put+0x60/0xe8
 iommu_group_get_for_dev+0xa8/0x1f0
 ipmmu_add_device+0x1c/0x40
 of_iommu_configure+0x118/0x190
Fix this by checking whether the domain's context was actually created before trying to destroy it.
Signed-off-by: Geert Uytterhoeven geert+renesas@glider.be Reviewed-by: Robin Murphy robin.murphy@arm.com Fixes: d25a2a16f0889 ('iommu: Add driver for Renesas VMSA-compatible IPMMU') Signed-off-by: Joerg Roedel jroedel@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/iommu/ipmmu-vmsa.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c index 22b94f8a9a04..d8598e44e381 100644 --- a/drivers/iommu/ipmmu-vmsa.c +++ b/drivers/iommu/ipmmu-vmsa.c @@ -501,6 +501,9 @@ static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain)
static void ipmmu_domain_destroy_context(struct ipmmu_vmsa_domain *domain) { + if (!domain->mmu) + return; + /* * Disable the context. Flush the TLB as required when modifying the * context registers.
From: Wei Li liwei213@huawei.com
[ Upstream commit 8e4829c6f7470722c1f5a1dc5769ebe09ef036d6 ]
Hynix UFS devices have deviations on the hi36xx platform which result in UFS burst transfer failures.
To fix the problem, the Hynix device must set the VS_DebugSaveConfigTime register to 0x10, which sets the time reference for SaveConfigTime to 250 ns (the default time reference is 40 ns).
This patch is necessary to boot on HiKey960 boards that use Hynix UFS chips (H28U62301AMR model: hB8aL1).
Cc: Vinayak Holikatti vinholikatti@gmail.com Cc: "James E.J. Bottomley" jejb@linux.vnet.ibm.com Cc: "Martin K. Petersen" martin.petersen@oracle.com Cc: linux-scsi@vger.kernel.org Signed-off-by: Wei Li liwei213@huawei.com Signed-off-by: Dmitry Shmidt dimitrysh@google.com [jstultz: Forward ported from older code, slight tweak to commit message] Signed-off-by: John Stultz john.stultz@linaro.org Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/ufs/ufs-hisi.c | 9 +++++++++ drivers/scsi/ufs/ufs_quirks.h | 6 ++++++ drivers/scsi/ufs/ufshcd.c | 2 ++ 3 files changed, 17 insertions(+)
diff --git a/drivers/scsi/ufs/ufs-hisi.c b/drivers/scsi/ufs/ufs-hisi.c index 46df707e6f2c..452e19f8fb47 100644 --- a/drivers/scsi/ufs/ufs-hisi.c +++ b/drivers/scsi/ufs/ufs-hisi.c @@ -20,6 +20,7 @@ #include "unipro.h" #include "ufs-hisi.h" #include "ufshci.h" +#include "ufs_quirks.h"
static int ufs_hisi_check_hibern8(struct ufs_hba *hba) { @@ -390,6 +391,14 @@ static void ufs_hisi_set_dev_cap(struct ufs_hisi_dev_params *hisi_param)
static void ufs_hisi_pwr_change_pre_change(struct ufs_hba *hba) { + if (hba->dev_quirks & UFS_DEVICE_QUIRK_HOST_VS_DEBUGSAVECONFIGTIME) { + pr_info("ufs flash device must set VS_DebugSaveConfigTime 0x10\n"); + /* VS_DebugSaveConfigTime */ + ufshcd_dme_set(hba, UIC_ARG_MIB(0xD0A0), 0x10); + /* sync length */ + ufshcd_dme_set(hba, UIC_ARG_MIB(0x1556), 0x48); + } + /* update */ ufshcd_dme_set(hba, UIC_ARG_MIB(0x15A8), 0x1); /* PA_TxSkip */ diff --git a/drivers/scsi/ufs/ufs_quirks.h b/drivers/scsi/ufs/ufs_quirks.h index 71f73d1d1ad1..5d2dfdb41a6f 100644 --- a/drivers/scsi/ufs/ufs_quirks.h +++ b/drivers/scsi/ufs/ufs_quirks.h @@ -131,4 +131,10 @@ struct ufs_dev_fix { */ #define UFS_DEVICE_QUIRK_HOST_PA_SAVECONFIGTIME (1 << 8)
+/* + * Some UFS devices require VS_DebugSaveConfigTime is 0x10, + * enabling this quirk ensure this. + */ +#define UFS_DEVICE_QUIRK_HOST_VS_DEBUGSAVECONFIGTIME (1 << 9) + #endif /* UFS_QUIRKS_H_ */ diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index 54074dd483a7..0b81d9d03357 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -230,6 +230,8 @@ static struct ufs_dev_fix ufs_fixups[] = { UFS_FIX(UFS_VENDOR_SKHYNIX, UFS_ANY_MODEL, UFS_DEVICE_NO_VCCQ), UFS_FIX(UFS_VENDOR_SKHYNIX, UFS_ANY_MODEL, UFS_DEVICE_QUIRK_HOST_PA_SAVECONFIGTIME), + UFS_FIX(UFS_VENDOR_SKHYNIX, "hB8aL1" /*H28U62301AMR*/, + UFS_DEVICE_QUIRK_HOST_VS_DEBUGSAVECONFIGTIME),
END_FIX };
From: YueHaibing yuehaibing@huawei.com
[ Upstream commit 207681fc5f3d5d398f106d1ae0080fc2373f707a ]
Fixes gcc '-Wunused-but-set-variable' warning:
drivers/net/can/usb/ucan.c: In function 'ucan_disconnect':
drivers/net/can/usb/ucan.c:1578:21: warning: variable 'udev' set but not used [-Wunused-but-set-variable]
  struct usb_device *udev;
Signed-off-by: YueHaibing yuehaibing@huawei.com Reviewed-by: Martin Elshuber martin.elshuber@theobroma-systems.com Signed-off-by: Marc Kleine-Budde mkl@pengutronix.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/can/usb/ucan.c | 3 --- 1 file changed, 3 deletions(-)
diff --git a/drivers/net/can/usb/ucan.c b/drivers/net/can/usb/ucan.c index 0678a38b1af4..c9fd83e8d947 100644 --- a/drivers/net/can/usb/ucan.c +++ b/drivers/net/can/usb/ucan.c @@ -1575,11 +1575,8 @@ static int ucan_probe(struct usb_interface *intf, /* disconnect the device */ static void ucan_disconnect(struct usb_interface *intf) { - struct usb_device *udev; struct ucan_priv *up = usb_get_intfdata(intf);
- udev = interface_to_usbdev(intf); - usb_set_intfdata(intf, NULL);
if (up) {
From: Fabrizio Castro fabrizio.castro@bp.renesas.com
[ Upstream commit 68c8d209cd4337da4fa04c672f0b62bb735969bc ]
Assigning 2 to "renesas,can-clock-select" tricks the driver into registering the CAN interface, even though we don't want that. This patch improves one of the checks to prevent that from happening.
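The fix replaces a plain upper-bound check with a bitmask membership test, so only explicitly listed clock selectors pass. A generic sketch of the pattern; the clock names and bit positions here are made up for illustration and are not the R-Car definitions.

#include <stdio.h>

#define BIT(n)	(1U << (n))

/* Hypothetical clock selectors; only three of the four are supported. */
enum { CLK_A = 0, CLK_B = 1, CLK_UNSUPPORTED = 2, CLK_EXT = 3 };

#define SUPPORTED_CLOCKS	(BIT(CLK_A) | BIT(CLK_B) | BIT(CLK_EXT))

static int validate_clock(unsigned int clock_select)
{
	/* A "clock_select < number_of_names" check would wrongly accept
	 * CLK_UNSUPPORTED; testing membership in the bitmask does not. */
	if (!(BIT(clock_select) & SUPPORTED_CLOCKS))
		return -1;	/* -EINVAL in the driver */
	return 0;
}

int main(void)
{
	printf("select 2 -> %d\n", validate_clock(2));
	printf("select 3 -> %d\n", validate_clock(3));
	return 0;
}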
Fixes: 862e2b6af9413b43 ("can: rcar_can: support all input clocks") Signed-off-by: Fabrizio Castro fabrizio.castro@bp.renesas.com Signed-off-by: Chris Paterson Chris.Paterson2@renesas.com Reviewed-by: Simon Horman horms+renesas@verge.net.au Signed-off-by: Marc Kleine-Budde mkl@pengutronix.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/can/rcar/rcar_can.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/net/can/rcar/rcar_can.c b/drivers/net/can/rcar/rcar_can.c index 11662f479e76..771a46083739 100644 --- a/drivers/net/can/rcar/rcar_can.c +++ b/drivers/net/can/rcar/rcar_can.c @@ -24,6 +24,9 @@
#define RCAR_CAN_DRV_NAME "rcar_can"
+#define RCAR_SUPPORTED_CLOCKS (BIT(CLKR_CLKP1) | BIT(CLKR_CLKP2) | \ + BIT(CLKR_CLKEXT)) + /* Mailbox configuration: * mailbox 60 - 63 - Rx FIFO mailboxes * mailbox 56 - 59 - Tx FIFO mailboxes @@ -789,7 +792,7 @@ static int rcar_can_probe(struct platform_device *pdev) goto fail_clk; }
- if (clock_select >= ARRAY_SIZE(clock_names)) { + if (!(BIT(clock_select) & RCAR_SUPPORTED_CLOCKS)) { err = -EINVAL; dev_err(&pdev->dev, "invalid CAN clock selected\n"); goto fail_clk;
From: Colin Ian King colin.king@canonical.com
[ Upstream commit 8bb0a88600f0267cfcc245d34f8c4abe8c282713 ]
In the case where req->fw->size > PAGE_SIZE, the error return rc is set to -EINVAL; however, it is then overwritten with rc = req->fw->size because the error exit path via the label 'out' is not taken. Fix this by adding the jump to the error exit path 'out'.
Detected by CoverityScan, CID#1453465 ("Unused value")
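The bug class is easy to reproduce in a reduced form. Below is a small stand-in sketch (not the real sysfs handler, names and error values simplified) showing how a forgotten goto lets the later size assignment clobber the error code.

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Reduced shape of a show() path: without the "goto out", rc would be
 * overwritten by the assignment further down. */
static long read_firmware(const char *data, size_t size, char *buf)
{
	long rc;

	if (size > PAGE_SIZE) {
		rc = -22;	/* -EINVAL */
		goto out;	/* the missing jump in the original code */
	}

	memcpy(buf, data, size);
	rc = (long)size;
out:
	return rc;
}

int main(void)
{
	char buf[PAGE_SIZE];

	printf("%ld\n", read_firmware("abc", 3, buf));		/* 3 */
	printf("%ld\n", read_firmware(NULL, PAGE_SIZE + 1, buf));	/* -22 */
	return 0;
}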
Fixes: c92316bf8e94 ("test_firmware: add batched firmware tests") Signed-off-by: Colin Ian King colin.king@canonical.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- lib/test_firmware.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/lib/test_firmware.c b/lib/test_firmware.c index b984806d7d7b..7cab9a9869ac 100644 --- a/lib/test_firmware.c +++ b/lib/test_firmware.c @@ -837,6 +837,7 @@ static ssize_t read_firmware_show(struct device *dev, if (req->fw->size > PAGE_SIZE) { pr_err("Testing interface must use PAGE_SIZE firmware for now\n"); rc = -EINVAL; + goto out; } memcpy(buf, req->fw->data, req->fw->size);
From: Benson Leung bleung@chromium.org
[ Upstream commit 0fd791841a6d67af1155a9c3de54dea51220721e ]
The Motorola/Zebra Symbol DS4308-HD is a handheld USB barcode scanner which does not have a battery, but reports one anyway that always has capacity 2.
Let's apply the IGNORE quirk to prevent it from being treated as a power supply, so that userspace doesn't conclude that this accessory is almost out of power and warn the user that they need to charge their wired barcode scanner.
Reported here: https://bugs.chromium.org/p/chromium/issues/detail?id=804720
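For context, HID battery quirks are essentially a vendor/product-ID table consulted at probe time. A simplified stand-alone sketch of such a lookup follows; the struct and helper are illustrative, only the 0x05e0/0x1200 IDs come from the patch below.

#include <stdint.h>
#include <stdio.h>

#define QUIRK_IGNORE_BATTERY 0x1

/* Illustrative quirk table: match by vendor/product ID, return flags. */
struct battery_quirk {
	uint16_t vendor;
	uint16_t product;
	unsigned int flags;
};

static const struct battery_quirk quirks[] = {
	{ 0x05e0 /* Symbol */, 0x1200 /* DS4308-HD */, QUIRK_IGNORE_BATTERY },
	{ 0, 0, 0 },	/* terminator */
};

static unsigned int lookup_quirks(uint16_t vendor, uint16_t product)
{
	const struct battery_quirk *q;

	for (q = quirks; q->vendor; q++)
		if (q->vendor == vendor && q->product == product)
			return q->flags;
	return 0;
}

int main(void)
{
	printf("flags = 0x%x\n", lookup_quirks(0x05e0, 0x1200));
	return 0;
}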
Signed-off-by: Benson Leung bleung@chromium.org Reviewed-by: Benjamin Tissoires benjamin.tissoires@redhat.com Signed-off-by: Benjamin Tissoires benjamin.tissoires@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hid/hid-ids.h | 1 + drivers/hid/hid-input.c | 3 +++ 2 files changed, 4 insertions(+)
diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h index 501c05cbec7e..a2d25055cd7f 100644 --- a/drivers/hid/hid-ids.h +++ b/drivers/hid/hid-ids.h @@ -1038,6 +1038,7 @@ #define USB_VENDOR_ID_SYMBOL 0x05e0 #define USB_DEVICE_ID_SYMBOL_SCANNER_1 0x0800 #define USB_DEVICE_ID_SYMBOL_SCANNER_2 0x1300 +#define USB_DEVICE_ID_SYMBOL_SCANNER_3 0x1200
#define USB_VENDOR_ID_SYNAPTICS 0x06cb #define USB_DEVICE_ID_SYNAPTICS_TP 0x0001 diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c index a481eaf39e88..a3916e58dbf5 100644 --- a/drivers/hid/hid-input.c +++ b/drivers/hid/hid-input.c @@ -325,6 +325,9 @@ static const struct hid_device_id hid_battery_quirks[] = { { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_BM084), HID_BATTERY_QUIRK_IGNORE }, + { HID_USB_DEVICE(USB_VENDOR_ID_SYMBOL, + USB_DEVICE_ID_SYMBOL_SCANNER_3), + HID_BATTERY_QUIRK_IGNORE }, {} };
From: Sven Eckelmann sven@narfation.org
[ Upstream commit f4156f9656feac21f4de712fac94fae964c5d402 ]
The announcement messages of batman-adv COMPAT_VERSION 15 have the possibility to announce additional information via a dynamic TVLV part. This part is optional for the ELP packets and currently not parsed by the Linux implementation. Still, out-of-tree versions are using it to transport things like neighbor hashes to optimize the rebroadcast behavior.
Since the ELP broadcast packets are smaller than the minimal Ethernet packet, they often have to be padded. This is usually done (as specified in RFC 894) with octets of zero, which works perfectly fine with the TVLV part (making it zero length and thus empty). But not all Ethernet-compatible hardware seems to follow this advice. To avoid ambiguous situations when parsing the TVLV header, just force the 4 bytes (TVLV length + padding) after the required ELP header to zero.
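A tiny sketch of the idea: zero the four bytes right after the ELP header so that, however the frame ends up being padded, a receiver parsing a TVLV header there sees an explicit zero length. The ELP header length used here is a made-up number; ETH_ZLEN is the usual 60-byte minimum frame length without FCS.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ETH_ZLEN	60	/* minimum Ethernet frame length without FCS */

int main(void)
{
	const size_t eth_hlen = 14;	/* Ethernet header */
	const size_t elp_hlen = 20;	/* hypothetical ELP header length */
	const size_t tvlv_padding = sizeof(uint32_t);
	uint8_t frame[ETH_ZLEN];

	memset(frame, 0xff, sizeof(frame));	/* "unfriendly" hardware padding */

	/* Force the 4 bytes after the ELP header to zero, i.e. an empty
	 * TVLV part, independent of what the padding contains. */
	memset(frame + eth_hlen + elp_hlen, 0, tvlv_padding);

	printf("padding after ELP header: %zu bytes\n",
	       sizeof(frame) - eth_hlen - elp_hlen);
	return 0;
}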
Fixes: d6f94d91f766 ("batman-adv: ELP - adding basic infrastructure") Reported-by: Linus Lüssing linus.luessing@c0d3.blue Signed-off-by: Sven Eckelmann sven@narfation.org Signed-off-by: Simon Wunderlich sw@simonwunderlich.de Signed-off-by: Sasha Levin sashal@kernel.org --- net/batman-adv/bat_v_elp.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/net/batman-adv/bat_v_elp.c b/net/batman-adv/bat_v_elp.c index 9f481cfdf77d..e8090f099eb8 100644 --- a/net/batman-adv/bat_v_elp.c +++ b/net/batman-adv/bat_v_elp.c @@ -352,19 +352,21 @@ static void batadv_v_elp_periodic_work(struct work_struct *work) */ int batadv_v_elp_iface_enable(struct batadv_hard_iface *hard_iface) { + static const size_t tvlv_padding = sizeof(__be32); struct batadv_elp_packet *elp_packet; unsigned char *elp_buff; u32 random_seqno; size_t size; int res = -ENOMEM;
- size = ETH_HLEN + NET_IP_ALIGN + BATADV_ELP_HLEN; + size = ETH_HLEN + NET_IP_ALIGN + BATADV_ELP_HLEN + tvlv_padding; hard_iface->bat_v.elp_skb = dev_alloc_skb(size); if (!hard_iface->bat_v.elp_skb) goto out;
skb_reserve(hard_iface->bat_v.elp_skb, ETH_HLEN + NET_IP_ALIGN); - elp_buff = skb_put_zero(hard_iface->bat_v.elp_skb, BATADV_ELP_HLEN); + elp_buff = skb_put_zero(hard_iface->bat_v.elp_skb, + BATADV_ELP_HLEN + tvlv_padding); elp_packet = (struct batadv_elp_packet *)elp_buff;
elp_packet->packet_type = BATADV_ELP;
From: Sven Eckelmann sven@narfation.org
[ Upstream commit d7d8bbb40a5b1f682ee6589e212934f4c6b8ad60 ]
The complete size ("total_size") of the fragmented packet is stored in the fragment header and in the size of the fragment chain. When the fragments are ready for merge, the skbuff's tail of the first fragment is expanded to have enough room after the data pointer for at least total_size. This means that it gets expanded by total_size - first_skb->len.
But this ignores the fact that, after expanding the buffer, the fragment header is pulled from this buffer. Assuming that the tailroom of the buffer was already 0, the buffer after the data pointer of the skbuff is now only total_size - len(fragment_header) large. When the merge function then processes the remaining fragments, the code that copies the data over to the merged skbuff will cause an skb_over_panic when it tries to actually put enough data to fill the total_size bytes of the packet.
The size of the skb_pull must therefore also be taken into account when the buffer's tailroom is expanded.
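A small worked example of the arithmetic, with made-up sizes, showing why the expansion must also include hdr_size:

#include <stdio.h>

int main(void)
{
	/* Hypothetical sizes, just to show the missing hdr_size. */
	unsigned int total_size = 1400;	/* merged payload size from the fragment header */
	unsigned int hdr_size   = 20;	/* size of one fragment header */
	unsigned int skb_len    = 520;	/* first fragment: header + 500 bytes of payload */

	/* Old: make room for total_size only. After the header is pulled,
	 * fewer than total_size bytes fit, so putting the remaining
	 * payload overruns the buffer (skb_over_panic). */
	unsigned int room_old = skb_len + (total_size - skb_len) - hdr_size;

	/* Fixed: also account for the header that will be pulled. */
	unsigned int room_new = skb_len + (total_size + hdr_size - skb_len) - hdr_size;

	printf("needed %u, old gives %u, fixed gives %u\n",
	       total_size, room_old, room_new);
	return 0;
}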
Fixes: 610bfc6bc99b ("batman-adv: Receive fragmented packets and merge") Reported-by: Martin Weinelt martin@darmstadt.freifunk.net Co-authored-by: Linus Lüssing linus.luessing@c0d3.blue Signed-off-by: Sven Eckelmann sven@narfation.org Signed-off-by: Simon Wunderlich sw@simonwunderlich.de Signed-off-by: Sasha Levin sashal@kernel.org --- net/batman-adv/fragmentation.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/batman-adv/fragmentation.c b/net/batman-adv/fragmentation.c index 0fddc17106bd..5b71a289d04f 100644 --- a/net/batman-adv/fragmentation.c +++ b/net/batman-adv/fragmentation.c @@ -275,7 +275,7 @@ batadv_frag_merge_packets(struct hlist_head *chain) kfree(entry);
packet = (struct batadv_frag_packet *)skb_out->data; - size = ntohs(packet->total_size); + size = ntohs(packet->total_size) + hdr_size;
/* Make room for the rest of the fragments. */ if (pskb_expand_head(skb_out, 0, size - skb_out->len, GFP_ATOMIC) < 0) {
Hello!
On 11/29/2018 08:55 AM, Sasha Levin wrote:
From: Sven Eckelmann sven@narfation.org
[ Upstream commit d7d8bbb40a5b1f682ee6589e212934f4c6b8ad60 ]
The complete size ("total_size") of the fragmented packet is stored in the fragment header and in the size of the fragment chain. When the fragments are ready for merge, the skbuff's tail of the first fragment is expanded to have enough room after the data pointer for at least total_size. This means that it gets expanded by total_size - first_skb->len.
But this is ignoring the fact that after expanding the buffer, the fragment header is pulled by from this buffer. Assuming that the tailroom of the
Pulled by what?
buffer was already 0, the buffer after the data pointer of the skbuff is now only total_size - len(fragment_header) large. When the merge function is then processing the remaining fragments, the code to copy the data over to the merged skbuff will cause an skb_over_panic when it tries to actually put enough data to fill the total_size bytes of the packet.
The size of the skb_pull must therefore also be taken into account when the buffer's tailroom is expanded.
Fixes: 610bfc6bc99b ("batman-adv: Receive fragmented packets and merge") Reported-by: Martin Weinelt martin@darmstadt.freifunk.net Co-authored-by: Linus Lüssing linus.luessing@c0d3.blue Signed-off-by: Sven Eckelmann sven@narfation.org Signed-off-by: Simon Wunderlich sw@simonwunderlich.de Signed-off-by: Sasha Levin sashal@kernel.org
[...]
MBR, Sergei
On 11/29/2018 01:00 PM, Sergei Shtylyov wrote:
[...]
But this is ignoring the fact that after expanding the buffer, the fragment header is pulled by from this buffer. Assuming that the tailroom of the
Pulled by what?
Oops, this was a -stable patch! Nevermind then. :-)
[...]
MBR, Sergei
From: Filippo Sironi sironi@amazon.de
[ Upstream commit ab99be4683d9db33b100497d463274ebd23bd67e ]
This register should have been programmed with the physical address of the memory location containing the shadow tail pointer for the guest virtual APIC log instead of the base address.
Fixes: 8bda0cfbdc1a ('iommu/amd: Detect and initialize guest vAPIC log') Signed-off-by: Filippo Sironi sironi@amazon.de Signed-off-by: Wei Wang wawei@amazon.de Signed-off-by: Suravee Suthikulpanit suravee.suthikulpanit@amd.com Signed-off-by: Joerg Roedel jroedel@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/iommu/amd_iommu_init.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c index 84b3e4445d46..e062ab9687c7 100644 --- a/drivers/iommu/amd_iommu_init.c +++ b/drivers/iommu/amd_iommu_init.c @@ -797,7 +797,8 @@ static int iommu_init_ga_log(struct amd_iommu *iommu) entry = iommu_virt_to_phys(iommu->ga_log) | GA_LOG_SIZE_512; memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_BASE_OFFSET, &entry, sizeof(entry)); - entry = (iommu_virt_to_phys(iommu->ga_log) & 0xFFFFFFFFFFFFFULL) & ~7ULL; + entry = (iommu_virt_to_phys(iommu->ga_log_tail) & + (BIT_ULL(52)-1)) & ~7ULL; memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_TAIL_OFFSET, &entry, sizeof(entry)); writel(0x00, iommu->mmio_base + MMIO_GA_HEAD_OFFSET);
From: Sudarsana Reddy Kalluru sudarsana.kalluru@cavium.com
[ Upstream commit 77e461d14ed141253573eeeb4d34eccc51e38328 ]
Driver assigns DMAE channel 0 for FW as part of the START_RAMROD command. FW uses this channel for DMAE operations (e.g., the TIME_SYNC implementation). The driver also uses the same channel 0 for DMAE operations for some of the PFs (e.g., PF0 on Port0). This could lead to concurrent access to the DMAE channel by FW and driver, which is not legal. Hence we need to assign a unique DMAE id for FW. Currently the following DMAE channels are used by the clients:
  MFW    - OCBB/OCSD functionality uses DMAE channels 14/15
  Driver - 0-3 and 8-11 (for PF DMAE operations)
           4 and 12 (for stats requests)
Assign the unique dmae_id '13' to the FW.
Changes from previous version:
------------------------------
v2: Incorporated the review comments.
Signed-off-by: Sudarsana Reddy Kalluru Sudarsana.Kalluru@cavium.com Signed-off-by: Michal Kalderon Michal.Kalderon@cavium.com Signed-off-by: David S. Miller davem@davemloft.net
Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/broadcom/bnx2x/bnx2x.h | 7 +++++++ drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c | 1 + 2 files changed, 8 insertions(+)
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h index be1506169076..0de487a8f0eb 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h @@ -2191,6 +2191,13 @@ void bnx2x_igu_clear_sb_gen(struct bnx2x *bp, u8 func, u8 idu_sb_id, #define PMF_DMAE_C(bp) (BP_PORT(bp) * MAX_DMAE_C_PER_PORT + \ E1HVN_MAX)
+/* Following is the DMAE channel number allocation for the clients. + * MFW: OCBB/OCSD implementations use DMAE channels 14/15 respectively. + * Driver: 0-3 and 8-11 (for PF dmae operations) + * 4 and 12 (for stats requests) + */ +#define BNX2X_FW_DMAE_C 13 /* Channel for FW DMAE operations */ + /* PCIE link and speed */ #define PCICFG_LINK_WIDTH 0x1f00000 #define PCICFG_LINK_WIDTH_SHIFT 20 diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c index 3f4d2c8da21a..a9eaaf3e73a4 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c @@ -6149,6 +6149,7 @@ static inline int bnx2x_func_send_start(struct bnx2x *bp, rdata->sd_vlan_tag = cpu_to_le16(start_params->sd_vlan_tag); rdata->path_id = BP_PATH(bp); rdata->network_cos_mode = start_params->network_cos_mode; + rdata->dmae_cmd_id = BNX2X_FW_DMAE_C;
rdata->vxlan_dst_port = cpu_to_le16(start_params->vxlan_dst_port); rdata->geneve_dst_port = cpu_to_le16(start_params->geneve_dst_port);
From: Denis Bolotin denis.bolotin@cavium.com
[ Upstream commit 9aaa4e8ba12972d674caeefbc5f88d83235dd697 ]
Release PTT before entering error flow.
Signed-off-by: Denis Bolotin denis.bolotin@cavium.com Signed-off-by: Michal Kalderon michal.kalderon@cavium.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/qlogic/qed/qed_main.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c index 2094d86a7a08..cf3b0e3dc350 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_main.c +++ b/drivers/net/ethernet/qlogic/qed/qed_main.c @@ -1634,9 +1634,9 @@ static int qed_drain(struct qed_dev *cdev) return -EBUSY; } rc = qed_mcp_drain(hwfn, ptt); + qed_ptt_release(hwfn, ptt); if (rc) return rc; - qed_ptt_release(hwfn, ptt); }
return 0;
From: Denis Bolotin denis.bolotin@cavium.com
[ Upstream commit e90202ed1cf9672c48a363c84a929932ebfe6fc0 ]
The TC received from the APP TLV is stored in offload_tc and should not be set by protocols which did not receive an APP TLV. Fix the condition for overriding offload_tc.
Signed-off-by: Denis Bolotin denis.bolotin@cavium.com Signed-off-by: Michal Kalderon michal.kalderon@cavium.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/qlogic/qed/qed_dcbx.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ethernet/qlogic/qed/qed_dcbx.c b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c index f5459de6d60a..5900a506bf8d 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_dcbx.c +++ b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c @@ -191,7 +191,7 @@ qed_dcbx_dp_protocol(struct qed_hwfn *p_hwfn, struct qed_dcbx_results *p_data) static void qed_dcbx_set_params(struct qed_dcbx_results *p_data, struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, - bool enable, u8 prio, u8 tc, + bool app_tlv, bool enable, u8 prio, u8 tc, enum dcbx_protocol_type type, enum qed_pci_personality personality) { @@ -210,7 +210,7 @@ qed_dcbx_set_params(struct qed_dcbx_results *p_data, p_data->arr[type].dont_add_vlan0 = true;
/* QM reconf data */ - if (p_hwfn->hw_info.personality == personality) + if (app_tlv && p_hwfn->hw_info.personality == personality) qed_hw_info_set_offload_tc(&p_hwfn->hw_info, tc);
/* Configure dcbx vlan priority in doorbell block for roce EDPM */ @@ -225,7 +225,7 @@ qed_dcbx_set_params(struct qed_dcbx_results *p_data, static void qed_dcbx_update_app_info(struct qed_dcbx_results *p_data, struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, - bool enable, u8 prio, u8 tc, + bool app_tlv, bool enable, u8 prio, u8 tc, enum dcbx_protocol_type type) { enum qed_pci_personality personality; @@ -240,7 +240,7 @@ qed_dcbx_update_app_info(struct qed_dcbx_results *p_data,
personality = qed_dcbx_app_update[i].personality;
- qed_dcbx_set_params(p_data, p_hwfn, p_ptt, enable, + qed_dcbx_set_params(p_data, p_hwfn, p_ptt, app_tlv, enable, prio, tc, type, personality); } } @@ -318,8 +318,8 @@ qed_dcbx_process_tlv(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, enable = true; }
- qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, enable, - priority, tc, type); + qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, true, + enable, priority, tc, type); } }
@@ -340,7 +340,7 @@ qed_dcbx_process_tlv(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, continue;
enable = (type == DCBX_PROTOCOL_ETH) ? false : !!dcbx_version; - qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, enable, + qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, false, enable, priority, tc, type); }
From: Michal Kalderon michal.kalderon@cavium.com
[ Upstream commit 291d57f67d2449737d1e370ab5b9a583818eaa0c ]
Certain flows need to access the rdma-info structure, for example dcbx update flows. In some cases there can be a race between the allocation or deallocation of the structure, which was done in roce start / roce stop, and an asynchronous dcbx event that tries to access the structure. For this reason, we move the allocation of the rdma_info structure to be similar to the iscsi/fcoe info structures, which are allocated during device setup. We add a new "active" field to the struct to define whether roce has already been started or not, and this is checked instead of whether the pointer to the info structure is set.
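The lifetime scheme is a common one: allocate the info structure once at device setup and gate its use with an "active" flag, so asynchronous handlers never race with allocation or free. A simplified userspace sketch of that scheme; all names are illustrative, not the qed API.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Info struct allocated once at device setup, with an "active" flag
 * toggled at start/stop, so asynchronous event handlers can always
 * dereference the pointer and only need to test the flag. */
struct rdma_info {
	bool active;
	/* ... protocol state ... */
};

struct hwfn {
	struct rdma_info *rdma_info;
};

static int info_alloc(struct hwfn *hwfn)	/* device setup */
{
	hwfn->rdma_info = calloc(1, sizeof(*hwfn->rdma_info));
	return hwfn->rdma_info ? 0 : -1;
}

static void rdma_start(struct hwfn *hwfn) { hwfn->rdma_info->active = true; }
static void rdma_stop(struct hwfn *hwfn)  { hwfn->rdma_info->active = false; }

/* e.g. an asynchronous dcbx-style event */
static void async_event(struct hwfn *hwfn)
{
	if (!hwfn->rdma_info->active)
		return;	/* safe: the pointer is always valid after setup */
	/* ... touch hwfn->rdma_info ... */
}

int main(void)
{
	struct hwfn hwfn;

	if (info_alloc(&hwfn))
		return 1;
	async_event(&hwfn);	/* before start: ignored, no crash */
	rdma_start(&hwfn);
	async_event(&hwfn);
	rdma_stop(&hwfn);
	free(hwfn.rdma_info);
	printf("done\n");
	return 0;
}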
Fixes: 51ff17251c9c ("qed: Add support for RoCE hw init") Signed-off-by: Michal Kalderon michal.kalderon@cavium.com Signed-off-by: Denis Bolotin denis.bolotin@cavium.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/qlogic/qed/qed_dev.c | 15 +++++-- drivers/net/ethernet/qlogic/qed/qed_rdma.c | 50 +++++++++++++--------- drivers/net/ethernet/qlogic/qed/qed_rdma.h | 5 +++ 3 files changed, 45 insertions(+), 25 deletions(-)
diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c index 97f073fd3725..9d2d18c32162 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_dev.c +++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c @@ -179,6 +179,10 @@ void qed_resc_free(struct qed_dev *cdev) qed_iscsi_free(p_hwfn); qed_ooo_free(p_hwfn); } + + if (QED_IS_RDMA_PERSONALITY(p_hwfn)) + qed_rdma_info_free(p_hwfn); + qed_iov_free(p_hwfn); qed_l2_free(p_hwfn); qed_dmae_info_free(p_hwfn); @@ -1074,6 +1078,12 @@ int qed_resc_alloc(struct qed_dev *cdev) goto alloc_err; }
+ if (QED_IS_RDMA_PERSONALITY(p_hwfn)) { + rc = qed_rdma_info_alloc(p_hwfn); + if (rc) + goto alloc_err; + } + /* DMA info initialization */ rc = qed_dmae_info_alloc(p_hwfn); if (rc) @@ -2091,11 +2101,8 @@ int qed_hw_start_fastpath(struct qed_hwfn *p_hwfn) if (!p_ptt) return -EAGAIN;
- /* If roce info is allocated it means roce is initialized and should - * be enabled in searcher. - */ if (p_hwfn->p_rdma_info && - p_hwfn->b_rdma_enabled_in_prs) + p_hwfn->p_rdma_info->active && p_hwfn->b_rdma_enabled_in_prs) qed_wr(p_hwfn, p_ptt, p_hwfn->rdma_prs_search_reg, 0x1);
/* Re-open incoming traffic */ diff --git a/drivers/net/ethernet/qlogic/qed/qed_rdma.c b/drivers/net/ethernet/qlogic/qed/qed_rdma.c index 62113438c880..7873d6dfd91f 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_rdma.c +++ b/drivers/net/ethernet/qlogic/qed/qed_rdma.c @@ -140,22 +140,34 @@ static u32 qed_rdma_get_sb_id(void *p_hwfn, u32 rel_sb_id) return FEAT_NUM((struct qed_hwfn *)p_hwfn, QED_PF_L2_QUE) + rel_sb_id; }
-static int qed_rdma_alloc(struct qed_hwfn *p_hwfn, - struct qed_ptt *p_ptt, - struct qed_rdma_start_in_params *params) +int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn) { struct qed_rdma_info *p_rdma_info; - u32 num_cons, num_tasks; - int rc = -ENOMEM;
- DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "Allocating RDMA\n"); - - /* Allocate a struct with current pf rdma info */ p_rdma_info = kzalloc(sizeof(*p_rdma_info), GFP_KERNEL); if (!p_rdma_info) - return rc; + return -ENOMEM; + + spin_lock_init(&p_rdma_info->lock);
p_hwfn->p_rdma_info = p_rdma_info; + return 0; +} + +void qed_rdma_info_free(struct qed_hwfn *p_hwfn) +{ + kfree(p_hwfn->p_rdma_info); + p_hwfn->p_rdma_info = NULL; +} + +static int qed_rdma_alloc(struct qed_hwfn *p_hwfn) +{ + struct qed_rdma_info *p_rdma_info = p_hwfn->p_rdma_info; + u32 num_cons, num_tasks; + int rc = -ENOMEM; + + DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "Allocating RDMA\n"); + if (QED_IS_IWARP_PERSONALITY(p_hwfn)) p_rdma_info->proto = PROTOCOLID_IWARP; else @@ -183,7 +195,7 @@ static int qed_rdma_alloc(struct qed_hwfn *p_hwfn, /* Allocate a struct with device params and fill it */ p_rdma_info->dev = kzalloc(sizeof(*p_rdma_info->dev), GFP_KERNEL); if (!p_rdma_info->dev) - goto free_rdma_info; + return rc;
/* Allocate a struct with port params and fill it */ p_rdma_info->port = kzalloc(sizeof(*p_rdma_info->port), GFP_KERNEL); @@ -298,8 +310,6 @@ static int qed_rdma_alloc(struct qed_hwfn *p_hwfn, kfree(p_rdma_info->port); free_rdma_dev: kfree(p_rdma_info->dev); -free_rdma_info: - kfree(p_rdma_info);
return rc; } @@ -370,8 +380,6 @@ static void qed_rdma_resc_free(struct qed_hwfn *p_hwfn)
kfree(p_rdma_info->port); kfree(p_rdma_info->dev); - - kfree(p_rdma_info); }
static void qed_rdma_free_tid(void *rdma_cxt, u32 itid) @@ -679,8 +687,6 @@ static int qed_rdma_setup(struct qed_hwfn *p_hwfn,
DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "RDMA setup\n");
- spin_lock_init(&p_hwfn->p_rdma_info->lock); - qed_rdma_init_devinfo(p_hwfn, params); qed_rdma_init_port(p_hwfn); qed_rdma_init_events(p_hwfn, params); @@ -727,7 +733,7 @@ static int qed_rdma_stop(void *rdma_cxt) /* Disable RoCE search */ qed_wr(p_hwfn, p_ptt, p_hwfn->rdma_prs_search_reg, 0); p_hwfn->b_rdma_enabled_in_prs = false; - + p_hwfn->p_rdma_info->active = 0; qed_wr(p_hwfn, p_ptt, PRS_REG_ROCE_DEST_QP_MAX_PF, 0);
ll2_ethertype_en = qed_rd(p_hwfn, p_ptt, PRS_REG_LIGHT_L2_ETHERTYPE_EN); @@ -1236,7 +1242,8 @@ qed_rdma_create_qp(void *rdma_cxt, u8 max_stats_queues; int rc;
- if (!rdma_cxt || !in_params || !out_params || !p_hwfn->p_rdma_info) { + if (!rdma_cxt || !in_params || !out_params || + !p_hwfn->p_rdma_info->active) { DP_ERR(p_hwfn->cdev, "qed roce create qp failed due to NULL entry (rdma_cxt=%p, in=%p, out=%p, roce_info=?\n", rdma_cxt, in_params, out_params); @@ -1802,8 +1809,8 @@ bool qed_rdma_allocated_qps(struct qed_hwfn *p_hwfn) { bool result;
- /* if rdma info has not been allocated, naturally there are no qps */ - if (!p_hwfn->p_rdma_info) + /* if rdma wasn't activated yet, naturally there are no qps */ + if (!p_hwfn->p_rdma_info->active) return false;
spin_lock_bh(&p_hwfn->p_rdma_info->lock); @@ -1849,7 +1856,7 @@ static int qed_rdma_start(void *rdma_cxt, if (!p_ptt) goto err;
- rc = qed_rdma_alloc(p_hwfn, p_ptt, params); + rc = qed_rdma_alloc(p_hwfn); if (rc) goto err1;
@@ -1858,6 +1865,7 @@ static int qed_rdma_start(void *rdma_cxt, goto err2;
qed_ptt_release(p_hwfn, p_ptt); + p_hwfn->p_rdma_info->active = 1;
return rc;
diff --git a/drivers/net/ethernet/qlogic/qed/qed_rdma.h b/drivers/net/ethernet/qlogic/qed/qed_rdma.h index 6f722ee8ee94..50d609c0e108 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_rdma.h +++ b/drivers/net/ethernet/qlogic/qed/qed_rdma.h @@ -102,6 +102,7 @@ struct qed_rdma_info { u16 max_queue_zones; enum protocol_type proto; struct qed_iwarp_info iwarp; + u8 active:1; };
struct qed_rdma_qp { @@ -176,10 +177,14 @@ struct qed_rdma_qp { #if IS_ENABLED(CONFIG_QED_RDMA) void qed_rdma_dpm_bar(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt); void qed_rdma_dpm_conf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt); +int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn); +void qed_rdma_info_free(struct qed_hwfn *p_hwfn); #else static inline void qed_rdma_dpm_conf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) {} static inline void qed_rdma_dpm_bar(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) {} +static inline int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn) {return -EINVAL} +static inline void qed_rdma_info_free(struct qed_hwfn *p_hwfn) {} #endif
int
From: Denis Bolotin denis.bolotin@cavium.com
[ Upstream commit ed4eac20dcffdad47709422e0cb925981b056668 ]
The value of "sb_index" is written by the hardware. Reading its value and writing it to "index" must finish before checking the loop condition.
Signed-off-by: Denis Bolotin denis.bolotin@cavium.com Signed-off-by: Michal Kalderon michal.kalderon@cavium.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/qlogic/qed/qed_int.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/net/ethernet/qlogic/qed/qed_int.c b/drivers/net/ethernet/qlogic/qed/qed_int.c index 0f0aba793352..b22f464ea3fa 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_int.c +++ b/drivers/net/ethernet/qlogic/qed/qed_int.c @@ -992,6 +992,8 @@ static int qed_int_attentions(struct qed_hwfn *p_hwfn) */ do { index = p_sb_attn->sb_index; + /* finish reading index before the loop condition */ + dma_rmb(); attn_bits = le32_to_cpu(p_sb_attn->atten_bits); attn_acks = le32_to_cpu(p_sb_attn->atten_ack); } while (index != p_sb_attn->sb_index);
From: Dan Carpenter dan.carpenter@oracle.com
[ Upstream commit 3c135e8900199e3b9375c1eff808cceba2ee37de ]
We added some error handling to this function but forgot to set the error code on this path.
Fixes: ecd29dabb2ba ("usb: dwc2: pci: Handle error cleanup in probe") Acked-by: Minas Harutyunyan hminas@synopsys.com Signed-off-by: Dan Carpenter dan.carpenter@oracle.com Signed-off-by: Felipe Balbi felipe.balbi@linux.intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/dwc2/pci.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/usb/dwc2/pci.c b/drivers/usb/dwc2/pci.c index d257c541e51b..7afc10872f1f 100644 --- a/drivers/usb/dwc2/pci.c +++ b/drivers/usb/dwc2/pci.c @@ -120,6 +120,7 @@ static int dwc2_pci_probe(struct pci_dev *pci, dwc2 = platform_device_alloc("dwc2", PLATFORM_DEVID_AUTO); if (!dwc2) { dev_err(dev, "couldn't allocate dwc2 device\n"); + ret = -ENOMEM; goto err; }
From: Shen Jing jingx.shen@intel.com
[ Upstream commit a9c859033f6ec772f8e3228c343bb1321584ae0e ]
This reverts commit b4194da3f9087dd38d91b40f9bec42d59ce589a8 since it causes list corruption followed by kernel panic:
Workqueue: adb ffs_aio_cancel_worker
RIP: 0010:__list_add_valid+0x4d/0x70
Call Trace:
 insert_work+0x47/0xb0
 __queue_work+0xf6/0x400
 queue_work_on+0x65/0x70
 dwc3_gadget_giveback+0x44/0x50 [dwc3]
 dwc3_gadget_ep_dequeue+0x83/0x2d0 [dwc3]
 ? finish_wait+0x80/0x80
 usb_ep_dequeue+0x1e/0x90
 process_one_work+0x18c/0x3b0
 worker_thread+0x3c/0x390
 ? process_one_work+0x3b0/0x3b0
 kthread+0x11e/0x140
 ? kthread_create_worker_on_cpu+0x70/0x70
 ret_from_fork+0x3a/0x50
This issue is seen with warm reboot stability testing.
Signed-off-by: Shen Jing jingx.shen@intel.com Signed-off-by: Saranya Gopal saranya.gopal@intel.com Signed-off-by: Felipe Balbi felipe.balbi@linux.intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/gadget/function/f_fs.c | 26 ++++++++------------------ 1 file changed, 8 insertions(+), 18 deletions(-)
diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c index 3ada83d81bda..31e8bf3578c8 100644 --- a/drivers/usb/gadget/function/f_fs.c +++ b/drivers/usb/gadget/function/f_fs.c @@ -215,7 +215,6 @@ struct ffs_io_data {
struct mm_struct *mm; struct work_struct work; - struct work_struct cancellation_work;
struct usb_ep *ep; struct usb_request *req; @@ -1073,31 +1072,22 @@ ffs_epfile_open(struct inode *inode, struct file *file) return 0; }
-static void ffs_aio_cancel_worker(struct work_struct *work) -{ - struct ffs_io_data *io_data = container_of(work, struct ffs_io_data, - cancellation_work); - - ENTER(); - - usb_ep_dequeue(io_data->ep, io_data->req); -} - static int ffs_aio_cancel(struct kiocb *kiocb) { struct ffs_io_data *io_data = kiocb->private; - struct ffs_data *ffs = io_data->ffs; + struct ffs_epfile *epfile = kiocb->ki_filp->private_data; int value;
ENTER();
- if (likely(io_data && io_data->ep && io_data->req)) { - INIT_WORK(&io_data->cancellation_work, ffs_aio_cancel_worker); - queue_work(ffs->io_completion_wq, &io_data->cancellation_work); - value = -EINPROGRESS; - } else { + spin_lock_irq(&epfile->ffs->eps_lock); + + if (likely(io_data && io_data->ep && io_data->req)) + value = usb_ep_dequeue(io_data->ep, io_data->req); + else value = -EINVAL; - } + + spin_unlock_irq(&epfile->ffs->eps_lock);
return value; }
From: Ursula Braun ubraun@linux.ibm.com
[ Upstream commit 007b656851ed7f94ba0fa358ac3e5d7705da6846 ]
SMC-D stress workload showed connection stalls. Since the firmware decides to skip raising an interrupt if the SBA DMBE mask bit is still set, this SBA DMBE mask bit should be cleared before the IRQ handling in the SMC code runs. Otherwise small windows are possible in which interrupts for incoming data are missed. SMC-D currently does not care about the old value of the SBA DMBE mask.
Acked-by: Sebastian Ott sebott@linux.ibm.com Signed-off-by: Ursula Braun ubraun@linux.ibm.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/s390/net/ism_drv.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/s390/net/ism_drv.c b/drivers/s390/net/ism_drv.c index c0631895154e..8684bcec8ff4 100644 --- a/drivers/s390/net/ism_drv.c +++ b/drivers/s390/net/ism_drv.c @@ -415,9 +415,9 @@ static irqreturn_t ism_handle_irq(int irq, void *data) break;
clear_bit_inv(bit, bv); + ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET] = 0; barrier(); smcd_handle_irq(ism->smcd, bit + ISM_DMB_BIT_OFFSET); - ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET] = 0; }
if (ism->sba->e) {
From: James Smart jsmart2021@gmail.com
[ Upstream commit 4cff280a5fccf6513ed9e895bb3a4e7ad8b0cedc ]
If an io error occurs on an io issued while connecting, recovery of the io falls flat as the state checking ends up nooping the error handler.
Create an err_work work item that is scheduled upon an io error while connecting. The work thread terminates all io on all queues and marks the queues as not connected. The termination of the io will return back to the callee, which will then back out of the connection attempt and will reschedule, if possible, the connection attempt.
The changes:
- In case there are several commands hitting the error handler, a state flag is kept so that the error work is only scheduled once, on the first error. The subsequent errors can be ignored.
- The calling sequence to stop keep alive and terminate the queues and their io is lifted from the reset routine. Made a small service routine used by both reset and err_work.
- During debugging, found that the teardown path can reference an uninitialized pointer, resulting in a NULL pointer oops. The aen_ops weren't initialized yet. Add validation on their initialization before calling the teardown routine.
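The "schedule the error handler exactly once" part of the list above is a small atomic-exchange idiom; here is a self-contained sketch of it. The names are illustrative, while the driver itself uses atomic_xchg() on err_work_active and schedule_work().

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* The first caller to flip the flag from 0 to 1 schedules the work;
 * later callers see the old value 1 and do nothing. */
static atomic_int err_work_active = 0;

static bool schedule_err_work(void)
{
	return true;	/* stand-in for schedule_work(&ctrl->err_work) */
}

static void on_io_error(void)
{
	int active = atomic_exchange(&err_work_active, 1);

	if (!active && !schedule_err_work()) {
		/* scheduling failed: allow a later error to retry */
		atomic_store(&err_work_active, 0);
	}
}

static void err_work_fn(void)
{
	/* ... tear down queues and I/O ... */
	atomic_store(&err_work_active, 0);	/* re-arm for the next failure */
}

int main(void)
{
	on_io_error();
	on_io_error();	/* second error: already armed, does nothing */
	err_work_fn();
	printf("err_work_active = %d\n", atomic_load(&err_work_active));
	return 0;
}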
Signed-off-by: James Smart jsmart2021@gmail.com Signed-off-by: Christoph Hellwig hch@lst.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/fc.c | 73 ++++++++++++++++++++++++++++++++++++------ 1 file changed, 63 insertions(+), 10 deletions(-)
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c index 611e70cae754..9375fa705d82 100644 --- a/drivers/nvme/host/fc.c +++ b/drivers/nvme/host/fc.c @@ -144,6 +144,7 @@ struct nvme_fc_ctrl {
bool ioq_live; bool assoc_active; + atomic_t err_work_active; u64 association_id;
struct list_head ctrl_list; /* rport->ctrl_list */ @@ -152,6 +153,7 @@ struct nvme_fc_ctrl { struct blk_mq_tag_set tag_set;
struct delayed_work connect_work; + struct work_struct err_work;
struct kref ref; u32 flags; @@ -1523,6 +1525,10 @@ nvme_fc_abort_aen_ops(struct nvme_fc_ctrl *ctrl) struct nvme_fc_fcp_op *aen_op = ctrl->aen_ops; int i;
+ /* ensure we've initialized the ops once */ + if (!(aen_op->flags & FCOP_FLAGS_AEN)) + return; + for (i = 0; i < NVME_NR_AEN_COMMANDS; i++, aen_op++) __nvme_fc_abort_op(ctrl, aen_op); } @@ -2036,7 +2042,25 @@ nvme_fc_nvme_ctrl_freed(struct nvme_ctrl *nctrl) static void nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg) { - /* only proceed if in LIVE state - e.g. on first error */ + int active; + + /* + * if an error (io timeout, etc) while (re)connecting, + * it's an error on creating the new association. + * Start the error recovery thread if it hasn't already + * been started. It is expected there could be multiple + * ios hitting this path before things are cleaned up. + */ + if (ctrl->ctrl.state == NVME_CTRL_CONNECTING) { + active = atomic_xchg(&ctrl->err_work_active, 1); + if (!active && !schedule_work(&ctrl->err_work)) { + atomic_set(&ctrl->err_work_active, 0); + WARN_ON(1); + } + return; + } + + /* Otherwise, only proceed if in LIVE state - e.g. on first error */ if (ctrl->ctrl.state != NVME_CTRL_LIVE) return;
@@ -2802,6 +2826,7 @@ nvme_fc_delete_ctrl(struct nvme_ctrl *nctrl) { struct nvme_fc_ctrl *ctrl = to_fc_ctrl(nctrl);
+ cancel_work_sync(&ctrl->err_work); cancel_delayed_work_sync(&ctrl->connect_work); /* * kill the association on the link side. this will block @@ -2854,23 +2879,30 @@ nvme_fc_reconnect_or_delete(struct nvme_fc_ctrl *ctrl, int status) }
static void -nvme_fc_reset_ctrl_work(struct work_struct *work) +__nvme_fc_terminate_io(struct nvme_fc_ctrl *ctrl) { - struct nvme_fc_ctrl *ctrl = - container_of(work, struct nvme_fc_ctrl, ctrl.reset_work); - int ret; - - nvme_stop_ctrl(&ctrl->ctrl); + nvme_stop_keep_alive(&ctrl->ctrl);
/* will block will waiting for io to terminate */ nvme_fc_delete_association(ctrl);
- if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) { + if (ctrl->ctrl.state != NVME_CTRL_CONNECTING && + !nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) dev_err(ctrl->ctrl.device, "NVME-FC{%d}: error_recovery: Couldn't change state " "to CONNECTING\n", ctrl->cnum); - return; - } +} + +static void +nvme_fc_reset_ctrl_work(struct work_struct *work) +{ + struct nvme_fc_ctrl *ctrl = + container_of(work, struct nvme_fc_ctrl, ctrl.reset_work); + int ret; + + __nvme_fc_terminate_io(ctrl); + + nvme_stop_ctrl(&ctrl->ctrl);
if (ctrl->rport->remoteport.port_state == FC_OBJSTATE_ONLINE) ret = nvme_fc_create_association(ctrl); @@ -2885,6 +2917,24 @@ nvme_fc_reset_ctrl_work(struct work_struct *work) ctrl->cnum); }
+static void +nvme_fc_connect_err_work(struct work_struct *work) +{ + struct nvme_fc_ctrl *ctrl = + container_of(work, struct nvme_fc_ctrl, err_work); + + __nvme_fc_terminate_io(ctrl); + + atomic_set(&ctrl->err_work_active, 0); + + /* + * Rescheduling the connection after recovering + * from the io error is left to the reconnect work + * item, which is what should have stalled waiting on + * the io that had the error that scheduled this work. + */ +} + static const struct nvme_ctrl_ops nvme_fc_ctrl_ops = { .name = "fc", .module = THIS_MODULE, @@ -2995,6 +3045,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts, ctrl->cnum = idx; ctrl->ioq_live = false; ctrl->assoc_active = false; + atomic_set(&ctrl->err_work_active, 0); init_waitqueue_head(&ctrl->ioabort_wait);
get_device(ctrl->dev); @@ -3002,6 +3053,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
INIT_WORK(&ctrl->ctrl.reset_work, nvme_fc_reset_ctrl_work); INIT_DELAYED_WORK(&ctrl->connect_work, nvme_fc_connect_ctrl_work); + INIT_WORK(&ctrl->err_work, nvme_fc_connect_err_work); spin_lock_init(&ctrl->lock);
/* io queue count */ @@ -3092,6 +3144,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts, fail_ctrl: nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_DELETING); cancel_work_sync(&ctrl->ctrl.reset_work); + cancel_work_sync(&ctrl->err_work); cancel_delayed_work_sync(&ctrl->connect_work);
ctrl->ctrl.opts = NULL;
From: Vasundhara Volam vasundhara-v.volam@broadcom.com
[ Upstream commit 8dc5ae2d48976764cf3498e97963fa06befefb0e ]
Fix the year and month offset while storing it in bnxt_fill_coredump_record().
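For context, struct tm stores the year as an offset from 1900 and the month as 0-11, which is why both fields need an adjustment before being stored in the record. A small userspace illustration (not part of the patch):

#include <stdio.h>
#include <time.h>

int main(void)
{
	time_t now = time(NULL);
	struct tm *tm = localtime(&now);

	if (!tm)
		return 1;
	/* Raw struct tm values: tm_year counts years since 1900, tm_mon is 0-11. */
	printf("raw:      year=%d month=%d\n", tm->tm_year, tm->tm_mon);
	/* Calendar values, matching what the coredump record should store. */
	printf("adjusted: year=%d month=%d\n", tm->tm_year + 1900, tm->tm_mon + 1);
	return 0;
}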
Fixes: 6c5657d085ae ("bnxt_en: Add support for ethtool get dump.") Signed-off-by: Vasundhara Volam vasundhara-v.volam@broadcom.com Signed-off-by: Michael Chan michael.chan@broadcom.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c index e52d7af3ab3e..da9b87689996 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c @@ -2862,8 +2862,8 @@ bnxt_fill_coredump_record(struct bnxt *bp, struct bnxt_coredump_record *record, record->asic_state = 0; strlcpy(record->system_name, utsname()->nodename, sizeof(record->system_name)); - record->year = cpu_to_le16(tm.tm_year); - record->month = cpu_to_le16(tm.tm_mon); + record->year = cpu_to_le16(tm.tm_year + 1900); + record->month = cpu_to_le16(tm.tm_mon + 1); record->day = cpu_to_le16(tm.tm_mday); record->hour = cpu_to_le16(tm.tm_hour); record->minute = cpu_to_le16(tm.tm_min);
From: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
[ Upstream commit 69756c6ff0de478c10100481f16c966dde3b5339 ]
[Why] Many panels support more than 8bpc but some modes are unavailable while running at greater than 8bpc due to DP/HDMI bandwidth constraints.
Support for more than 8bpc was added recently in the driver but it defaults to the maximum supported bpc - locking out these modes.
This should be a user configurable option such that the user can select what bpc configuration they would like.
[How] This patch introduces the "max bpc" amdgpu driver specific connector property so the user can limit the maximum bpc. It ranges from 8 to 16.
This doesn't directly set the preferred bpc for the panel since it follows Intel's existing driver conventions.
This property should be removed once common drm support for max bpc lands.
v2: rebase on upstream (Alex)
Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Acked-by: Alex Deucher alexander.deucher@amd.com Reviewed-by: Harry Wentland harry.wentland@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_display.c | 7 +++++++ drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h | 2 ++ 2 files changed, 9 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c index 6748cd7fc129..686a26de50f9 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c @@ -626,6 +626,13 @@ int amdgpu_display_modeset_create_props(struct amdgpu_device *adev) "dither", amdgpu_dither_enum_list, sz);
+ if (amdgpu_device_has_dc_support(adev)) { + adev->mode_info.max_bpc_property = + drm_property_create_range(adev->ddev, 0, "max bpc", 8, 16); + if (!adev->mode_info.max_bpc_property) + return -ENOMEM; + } + return 0; }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h index b9e9e8b02fb7..d1b4d9b6aae0 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h @@ -339,6 +339,8 @@ struct amdgpu_mode_info { struct drm_property *audio_property; /* FMT dithering */ struct drm_property *dither_property; + /* maximum number of bits per channel for monitor color */ + struct drm_property *max_bpc_property; /* hardcoded DFP edid from BIOS */ struct edid *bios_hardcoded_edid; int bios_hardcoded_edid_size;
From: Nicholas Kazlauskas nicholas.kazlauskas@amd.com
[ Upstream commit 07e3a1cfb0568b6d8d7862077029af96af6690ea ]
[Why] Many panels support more than 8bpc but some modes are unavailable while running at greater than 8bpc due to DP/HDMI bandwidth constraints.
Support for more than 8bpc was added recently in the driver but it defaults to the maximum supported bpc - locking out these modes.
This should be a user configurable option such that the user can select what bpc configuration they would like.
[How] This patch adds support for getting and setting the amdgpu driver specific "max bpc" property on the connector.
It also adds support for limiting the output bpc based on the property value. The default limitation is the lowest value in the range, 8bpc. This was the old value before the range was uncapped.
This patch should be updated/replaced later once common drm support for max bpc lands.
Bugzilla: https://bugs.freedesktop.org/108542 Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=201585 Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=200645 Fixes: e03fd3f300f6 ("drm/amd/display: Do not limit color depth to 8bpc")
v2: rebase on upstream (Alex)
Signed-off-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Acked-by: Alex Deucher alexander.deucher@amd.com Reviewed-by: Harry Wentland harry.wentland@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 16 ++++++++++++++++ .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h | 1 + 2 files changed, 17 insertions(+)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index ef5c6af4d964..299def84e69c 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -2213,8 +2213,15 @@ static void update_stream_scaling_settings(const struct drm_display_mode *mode, static enum dc_color_depth convert_color_depth_from_display_info(const struct drm_connector *connector) { + struct dm_connector_state *dm_conn_state = + to_dm_connector_state(connector->state); uint32_t bpc = connector->display_info.bpc;
+ /* TODO: Remove this when there's support for max_bpc in drm */ + if (dm_conn_state && bpc > dm_conn_state->max_bpc) + /* Round down to nearest even number. */ + bpc = dm_conn_state->max_bpc - (dm_conn_state->max_bpc & 1); + switch (bpc) { case 0: /* Temporary Work around, DRM don't parse color depth for @@ -2796,6 +2803,9 @@ int amdgpu_dm_connector_atomic_set_property(struct drm_connector *connector, } else if (property == adev->mode_info.underscan_property) { dm_new_state->underscan_enable = val; ret = 0; + } else if (property == adev->mode_info.max_bpc_property) { + dm_new_state->max_bpc = val; + ret = 0; }
return ret; @@ -2838,6 +2848,9 @@ int amdgpu_dm_connector_atomic_get_property(struct drm_connector *connector, } else if (property == adev->mode_info.underscan_property) { *val = dm_state->underscan_enable; ret = 0; + } else if (property == adev->mode_info.max_bpc_property) { + *val = dm_state->max_bpc; + ret = 0; } return ret; } @@ -3658,6 +3671,9 @@ void amdgpu_dm_connector_init_helper(struct amdgpu_display_manager *dm, drm_object_attach_property(&aconnector->base.base, adev->mode_info.underscan_vborder_property, 0); + drm_object_attach_property(&aconnector->base.base, + adev->mode_info.max_bpc_property, + 0);
}
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h index aba2c5c1d2f8..74aedcffc4bb 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h @@ -213,6 +213,7 @@ struct dm_connector_state { enum amdgpu_rmx_type scaling; uint8_t underscan_vborder; uint8_t underscan_hborder; + uint8_t max_bpc; bool underscan_enable; struct mod_freesync_user_enable user_enable; bool freesync_capable;
From: Jack Morgenstein jackm@dev.mellanox.co.il
[ Upstream commit bd85fbc2038a1bbe84990b23ff69b6fc81a32b2c ]
When re-registering a user mr while running SRIOV, the mpt information for the existing mr is obtained via the QUERY_MPT fw command. The returned information includes the mpt's lkey.
This retrieved mpt information is used to move the mpt back to hardware ownership in the rereg flow (via the SW2HW_MPT fw command when running SRIOV).
The fw API spec states that for SW2HW_MPT, the lkey field must be zero. Any ConnectX-3 PF driver which checks for strict spec adherence will return failure for SW2HW_MPT if the lkey field is not zero (although the fw in practice ignores this field for SW2HW_MPT).
Thus, in order to conform to the fw API spec, set the lkey field to zero before invoking SW2HW_MPT when running SRIOV.
Fixes: e630664c8383 ("mlx4_core: Add helper functions to support MR re-registration") Signed-off-by: Jack Morgenstein jackm@dev.mellanox.co.il Signed-off-by: Tariq Toukan tariqt@mellanox.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx4/mr.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/net/ethernet/mellanox/mlx4/mr.c b/drivers/net/ethernet/mellanox/mlx4/mr.c index 2e84f10f59ba..1a11bc0e1612 100644 --- a/drivers/net/ethernet/mellanox/mlx4/mr.c +++ b/drivers/net/ethernet/mellanox/mlx4/mr.c @@ -363,6 +363,7 @@ int mlx4_mr_hw_write_mpt(struct mlx4_dev *dev, struct mlx4_mr *mmr, container_of((void *)mpt_entry, struct mlx4_cmd_mailbox, buf);
+ (*mpt_entry)->lkey = 0; err = mlx4_SW2HW_MPT(dev, mailbox, key); }
From: Tariq Toukan tariqt@mellanox.com
[ Upstream commit 3ea7e7ea53c9f6ee41cb69a29c375fe9dd9a56a7 ]
Initialize the uid variable to zero to avoid the compilation warning.
Fixes: 7a89399ffad7 ("net/mlx4: Add mlx4_bitmap zone allocator") Signed-off-by: Tariq Toukan tariqt@mellanox.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx4/alloc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx4/alloc.c b/drivers/net/ethernet/mellanox/mlx4/alloc.c index 4bdf25059542..21788d4f9881 100644 --- a/drivers/net/ethernet/mellanox/mlx4/alloc.c +++ b/drivers/net/ethernet/mellanox/mlx4/alloc.c @@ -337,7 +337,7 @@ void mlx4_zone_allocator_destroy(struct mlx4_zone_allocator *zone_alloc) static u32 __mlx4_alloc_from_zone(struct mlx4_zone_entry *zone, int count, int align, u32 skip_mask, u32 *puid) { - u32 uid; + u32 uid = 0; u32 res; struct mlx4_zone_allocator *zone_alloc = zone->allocator; struct mlx4_zone_entry *curr_node;
From: Aya Levin ayal@mellanox.com
[ Upstream commit a463146e67c848cbab5ce706d6528281b7cded08 ]
UBSAN: Undefined behavior in drivers/net/ethernet/mellanox/mlx4/resource_tracker.c:626:29 signed integer overflow: 1802201963 + 1802201963 cannot be represented in type 'int'
The union of res_reserved and res_port_rsvd[MLX4_MAX_PORTS] monitors granting of reserved resources. The grant operation is calculated and protected, thus both members of the union cannot be negative. Changed the type of res_reserved and of res_port_rsvd[MLX4_MAX_PORTS] from signed int to unsigned int, allowing large values.
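To illustrate why the type change helps: overflowing a signed int is undefined behavior (the condition UBSAN reports above), while unsigned arithmetic is defined to wrap. A minimal sketch, assuming a GCC/Clang toolchain for __builtin_add_overflow():

#include <limits.h>
#include <stdio.h>

int main(void)
{
	int a = 1802201963;	/* the value from the UBSAN report */
	int signed_sum;
	unsigned int unsigned_sum = (unsigned int)a + (unsigned int)a;

	/* a + a as int would exceed INT_MAX, which is undefined behavior;
	 * the builtin detects the overflow without performing the UB add. */
	if (__builtin_add_overflow(a, a, &signed_sum))
		printf("a + a overflows int (INT_MAX = %d)\n", INT_MAX);

	/* unsigned int arithmetic is defined to wrap modulo 2^32, so the
	 * switch to unsigned counters removes the undefined behavior. */
	printf("as unsigned int the sum is %u\n", unsigned_sum);
	return 0;
}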
Fixes: 5a0d0a6161ae ("mlx4: Structures and init/teardown for VF resource quotas") Signed-off-by: Aya Levin ayal@mellanox.com Signed-off-by: Tariq Toukan tariqt@mellanox.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx4/mlx4.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4.h b/drivers/net/ethernet/mellanox/mlx4/mlx4.h index ebcd2778eeb3..23f1b5b512c2 100644 --- a/drivers/net/ethernet/mellanox/mlx4/mlx4.h +++ b/drivers/net/ethernet/mellanox/mlx4/mlx4.h @@ -540,8 +540,8 @@ struct slave_list { struct resource_allocator { spinlock_t alloc_lock; /* protect quotas */ union { - int res_reserved; - int res_port_rsvd[MLX4_MAX_PORTS]; + unsigned int res_reserved; + unsigned int res_port_rsvd[MLX4_MAX_PORTS]; }; union { int res_free;
From: Andrew Morton akpm@linux-foundation.org
[ Upstream commit a97b9565338350d70d8d971c4ee6f0d4fa967418 ]
Add missing semicolon.
Fixes: 291d57f67d244973 ("qed: Fix rdma_info structure allocation") Cc: Michal Kalderon michal.kalderon@cavium.com Cc: Denis Bolotin denis.bolotin@cavium.com Cc: David S. Miller davem@davemloft.net Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/qlogic/qed/qed_rdma.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/qlogic/qed/qed_rdma.h b/drivers/net/ethernet/qlogic/qed/qed_rdma.h index 50d609c0e108..3689fe3e5935 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_rdma.h +++ b/drivers/net/ethernet/qlogic/qed/qed_rdma.h @@ -183,7 +183,7 @@ void qed_rdma_info_free(struct qed_hwfn *p_hwfn); static inline void qed_rdma_dpm_conf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) {} static inline void qed_rdma_dpm_bar(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) {} -static inline int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn) {return -EINVAL} +static inline int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn) {return -EINVAL;} static inline void qed_rdma_info_free(struct qed_hwfn *p_hwfn) {} #endif
From: Robert Jarzmik robert.jarzmik@free.fr
[ Upstream commit 70cdb6ad6dc342d9643a00c9092e88f0075f2b9a ]
As pointed out by Gregor, the spitz keyboard matrix is broken, with or without CONFIG_PINCTRL set, quoting: "The gpio matrix keypad on the Zaurus C3x00 (see spitz.c) does not work properly. Noticeable are that rshift+c does nothing where as lshift+c creates C. Opposite it is for rshift+a vs lshift+a, here only rshift works. This affects a few other combinations using the rshift or lshift buttons."
As a matter of fact, since platform_data based builds require CONFIG_PINCTRL=n for now (as opposed to devicetree builds, where it should be set), the gpio driver should change the direction itself, which is what was attempted by commit c4e5ffb6f224 ("gpio: pxa: fix legacy non pinctrl aware builds").
Unfortunately, the input case was inverted, and the direction change was never done. This wasn't seen up until now because the initial platform setup (MFP) was setting this direction. Yet in Gregory's case, the matrix-keypad driver changes the direction back and forth dynamically, and this is why he's the first to report it.
Fixes: c4e5ffb6f224 ("gpio: pxa: fix legacy non pinctrl aware builds") Tested-by: Greg greguu@null.net Signed-off-by: Robert Jarzmik robert.jarzmik@free.fr Signed-off-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpio/gpio-pxa.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpio/gpio-pxa.c b/drivers/gpio/gpio-pxa.c index c18712dabf93..9f3f166f1760 100644 --- a/drivers/gpio/gpio-pxa.c +++ b/drivers/gpio/gpio-pxa.c @@ -268,8 +268,8 @@ static int pxa_gpio_direction_input(struct gpio_chip *chip, unsigned offset)
if (pxa_gpio_has_pinctrl()) { ret = pinctrl_gpio_direction_input(chip->base + offset); - if (!ret) - return 0; + if (ret) + return ret; }
spin_lock_irqsave(&gpio_lock, flags);
From: Bartosz Golaszewski brgl@bgdev.pl
[ Upstream commit bff466bac59994cfcceabe4d0be5fdc1c20cd5b8 ]
Commit 3edfb7bd76bd ("gpiolib: Show correct direction from the beginning") fixed an existing issue but broke libgpiod tests by changing the default direction of dummy lines to output.
We don't break user-space, so make gpio-mockup behave as before.
Signed-off-by: Bartosz Golaszewski brgl@bgdev.pl Signed-off-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpio/gpio-mockup.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/gpio/gpio-mockup.c b/drivers/gpio/gpio-mockup.c index d66b7a768ecd..945bd13e5e79 100644 --- a/drivers/gpio/gpio-mockup.c +++ b/drivers/gpio/gpio-mockup.c @@ -32,8 +32,8 @@ #define gpio_mockup_err(...) pr_err(GPIO_MOCKUP_NAME ": " __VA_ARGS__)
enum { - GPIO_MOCKUP_DIR_OUT = 0, - GPIO_MOCKUP_DIR_IN = 1, + GPIO_MOCKUP_DIR_IN = 0, + GPIO_MOCKUP_DIR_OUT = 1, };
/* @@ -135,7 +135,7 @@ static int gpio_mockup_get_direction(struct gpio_chip *gc, unsigned int offset) { struct gpio_mockup_chip *chip = gpiochip_get_data(gc);
- return chip->lines[offset].dir; + return !chip->lines[offset].dir; }
static int gpio_mockup_to_irq(struct gpio_chip *gc, unsigned int offset)
From: Lucas Bates lucasb@mojatatu.com
[ Upstream commit 5aaf6428526bcad98d6f51f2f679c919bb75d7e9 ]
Prevent exceptions from being raised while decoding output from an executed command. There is no impact on tdc's execution; if the output cannot be cleanly decoded, the verify command phase would simply fail the pattern match.
Signed-off-by: Lucas Bates lucasb@mojatatu.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- tools/testing/selftests/tc-testing/tdc.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/tc-testing/tdc.py b/tools/testing/selftests/tc-testing/tdc.py index 87a04a8a5945..9b3f414ff1e9 100755 --- a/tools/testing/selftests/tc-testing/tdc.py +++ b/tools/testing/selftests/tc-testing/tdc.py @@ -134,9 +134,9 @@ def exec_cmd(args, pm, stage, command): (rawout, serr) = proc.communicate()
if proc.returncode != 0 and len(serr) > 0: - foutput = serr.decode("utf-8") + foutput = serr.decode("utf-8", errors="ignore") else: - foutput = rawout.decode("utf-8") + foutput = rawout.decode("utf-8", errors="ignore")
proc.stdout.close() proc.stderr.close()
From: "Brenda J. Butler" bjb@mojatatu.com
[ Upstream commit c6cecf4ae44e4ce9158ef8806358142c3512cd33 ]
Add some defensive coding in case one of the subprocesses created by tdc returns nothing. If no object is returned from exec_cmd, then tdc will halt with an unhandled exception.
Signed-off-by: Brenda J. Butler bjb@mojatatu.com Signed-off-by: Lucas Bates lucasb@mojatatu.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- tools/testing/selftests/tc-testing/tdc.py | 14 +++++++++++--- 1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/tc-testing/tdc.py b/tools/testing/selftests/tc-testing/tdc.py index 9b3f414ff1e9..7607ba3e3cbe 100755 --- a/tools/testing/selftests/tc-testing/tdc.py +++ b/tools/testing/selftests/tc-testing/tdc.py @@ -169,6 +169,8 @@ def prepare_env(args, pm, stage, prefix, cmdlist, output = None): file=sys.stderr) print("\n{} *** Error message: "{}"".format(prefix, foutput), file=sys.stderr) + print("returncode {}; expected {}".format(proc.returncode, + exit_codes)) print("\n{} *** Aborting test run.".format(prefix), file=sys.stderr) print("\n\n{} *** stdout ***".format(proc.stdout), file=sys.stderr) print("\n\n{} *** stderr ***".format(proc.stderr), file=sys.stderr) @@ -195,12 +197,18 @@ def run_one_test(pm, args, index, tidx): print('-----> execute stage') pm.call_pre_execute() (p, procout) = exec_cmd(args, pm, 'execute', tidx["cmdUnderTest"]) - exit_code = p.returncode + if p: + exit_code = p.returncode + else: + exit_code = None + pm.call_post_execute()
- if (exit_code != int(tidx["expExitCode"])): + if (exit_code is None or exit_code != int(tidx["expExitCode"])): result = False - print("exit:", exit_code, int(tidx["expExitCode"])) + print("exit: {!r}".format(exit_code)) + print("exit: {}".format(int(tidx["expExitCode"]))) + #print("exit: {!r} {}".format(exit_code, int(tidx["expExitCode"]))) print(procout) else: if args.verbose > 0:
From: Olof Johansson olof@lixom.net
[ Upstream commit 33bf5519ae5dd356b182a94e3622f42860274a38 ]
PAGE_READ is used by RISC-V arch code included through mm headers, and it makes sense to add a prefix to these macros in the driver.
drivers/mtd/nand/raw/qcom_nandc.c:153: warning: "PAGE_READ" redefined
#define PAGE_READ 0x2
In file included from include/linux/memremap.h:7,
 from include/linux/mm.h:27,
 from include/linux/scatterlist.h:8,
 from include/linux/dma-mapping.h:11,
 from drivers/mtd/nand/raw/qcom_nandc.c:17:
arch/riscv/include/asm/pgtable.h:48: note: this is the location of the previous definition
Caught by riscv allmodconfig.
Signed-off-by: Olof Johansson olof@lixom.net Reviewed-by: Miquel Raynal miquel.raynal@bootlin.com Signed-off-by: Boris Brezillon boris.brezillon@bootlin.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mtd/nand/raw/qcom_nandc.c | 32 +++++++++++++++---------------- 1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c index d1d470bb32e4..8815f3e2b718 100644 --- a/drivers/mtd/nand/raw/qcom_nandc.c +++ b/drivers/mtd/nand/raw/qcom_nandc.c @@ -151,15 +151,15 @@ #define NAND_VERSION_MINOR_SHIFT 16
/* NAND OP_CMDs */ -#define PAGE_READ 0x2 -#define PAGE_READ_WITH_ECC 0x3 -#define PAGE_READ_WITH_ECC_SPARE 0x4 -#define PROGRAM_PAGE 0x6 -#define PAGE_PROGRAM_WITH_ECC 0x7 -#define PROGRAM_PAGE_SPARE 0x9 -#define BLOCK_ERASE 0xa -#define FETCH_ID 0xb -#define RESET_DEVICE 0xd +#define OP_PAGE_READ 0x2 +#define OP_PAGE_READ_WITH_ECC 0x3 +#define OP_PAGE_READ_WITH_ECC_SPARE 0x4 +#define OP_PROGRAM_PAGE 0x6 +#define OP_PAGE_PROGRAM_WITH_ECC 0x7 +#define OP_PROGRAM_PAGE_SPARE 0x9 +#define OP_BLOCK_ERASE 0xa +#define OP_FETCH_ID 0xb +#define OP_RESET_DEVICE 0xd
/* Default Value for NAND_DEV_CMD_VLD */ #define NAND_DEV_CMD_VLD_VAL (READ_START_VLD | WRITE_START_VLD | \ @@ -692,11 +692,11 @@ static void update_rw_regs(struct qcom_nand_host *host, int num_cw, bool read)
if (read) { if (host->use_ecc) - cmd = PAGE_READ_WITH_ECC | PAGE_ACC | LAST_PAGE; + cmd = OP_PAGE_READ_WITH_ECC | PAGE_ACC | LAST_PAGE; else - cmd = PAGE_READ | PAGE_ACC | LAST_PAGE; + cmd = OP_PAGE_READ | PAGE_ACC | LAST_PAGE; } else { - cmd = PROGRAM_PAGE | PAGE_ACC | LAST_PAGE; + cmd = OP_PROGRAM_PAGE | PAGE_ACC | LAST_PAGE; }
if (host->use_ecc) { @@ -1170,7 +1170,7 @@ static int nandc_param(struct qcom_nand_host *host) * in use. we configure the controller to perform a raw read of 512 * bytes to read onfi params */ - nandc_set_reg(nandc, NAND_FLASH_CMD, PAGE_READ | PAGE_ACC | LAST_PAGE); + nandc_set_reg(nandc, NAND_FLASH_CMD, OP_PAGE_READ | PAGE_ACC | LAST_PAGE); nandc_set_reg(nandc, NAND_ADDR0, 0); nandc_set_reg(nandc, NAND_ADDR1, 0); nandc_set_reg(nandc, NAND_DEV0_CFG0, 0 << CW_PER_PAGE @@ -1224,7 +1224,7 @@ static int erase_block(struct qcom_nand_host *host, int page_addr) struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
nandc_set_reg(nandc, NAND_FLASH_CMD, - BLOCK_ERASE | PAGE_ACC | LAST_PAGE); + OP_BLOCK_ERASE | PAGE_ACC | LAST_PAGE); nandc_set_reg(nandc, NAND_ADDR0, page_addr); nandc_set_reg(nandc, NAND_ADDR1, 0); nandc_set_reg(nandc, NAND_DEV0_CFG0, @@ -1255,7 +1255,7 @@ static int read_id(struct qcom_nand_host *host, int column) if (column == -1) return 0;
- nandc_set_reg(nandc, NAND_FLASH_CMD, FETCH_ID); + nandc_set_reg(nandc, NAND_FLASH_CMD, OP_FETCH_ID); nandc_set_reg(nandc, NAND_ADDR0, column); nandc_set_reg(nandc, NAND_ADDR1, 0); nandc_set_reg(nandc, NAND_FLASH_CHIP_SELECT, @@ -1276,7 +1276,7 @@ static int reset(struct qcom_nand_host *host) struct nand_chip *chip = &host->chip; struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
- nandc_set_reg(nandc, NAND_FLASH_CMD, RESET_DEVICE); + nandc_set_reg(nandc, NAND_FLASH_CMD, OP_RESET_DEVICE); nandc_set_reg(nandc, NAND_EXEC_CMD, 1);
write_reg_dma(nandc, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL);
From: Dave Gerlach d-gerlach@ti.com
[ Upstream commit d98ccfc3948ab63152494bb6b9c17e15295c0310 ]
Currently the ti-cpufreq driver blindly registers a 'ti-cpufreq' platform device to force the driver to probe on any platforms where the driver is built in. However, this should only happen on platforms that actually can make use of the driver. There is already functionality in place to match the SoC compatible, so let's factor this out into a separate call and make sure we find a match before creating the ti-cpufreq platform device.
Reviewed-by: Johan Hovold johan@kernel.org Signed-off-by: Dave Gerlach d-gerlach@ti.com Acked-by: Viresh Kumar viresh.kumar@linaro.org Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/cpufreq/ti-cpufreq.c | 26 +++++++++++++++++++++----- 1 file changed, 21 insertions(+), 5 deletions(-)
diff --git a/drivers/cpufreq/ti-cpufreq.c b/drivers/cpufreq/ti-cpufreq.c index 3f0e2a14895a..22b53bf26817 100644 --- a/drivers/cpufreq/ti-cpufreq.c +++ b/drivers/cpufreq/ti-cpufreq.c @@ -201,19 +201,28 @@ static const struct of_device_id ti_cpufreq_of_match[] = { {}, };
+static const struct of_device_id *ti_cpufreq_match_node(void) +{ + struct device_node *np; + const struct of_device_id *match; + + np = of_find_node_by_path("/"); + match = of_match_node(ti_cpufreq_of_match, np); + of_node_put(np); + + return match; +} + static int ti_cpufreq_probe(struct platform_device *pdev) { u32 version[VERSION_COUNT]; - struct device_node *np; const struct of_device_id *match; struct opp_table *ti_opp_table; struct ti_cpufreq_data *opp_data; const char * const reg_names[] = {"vdd", "vbb"}; int ret;
- np = of_find_node_by_path("/"); - match = of_match_node(ti_cpufreq_of_match, np); - of_node_put(np); + match = dev_get_platdata(&pdev->dev); if (!match) return -ENODEV;
@@ -290,7 +299,14 @@ static int ti_cpufreq_probe(struct platform_device *pdev)
static int ti_cpufreq_init(void) { - platform_device_register_simple("ti-cpufreq", -1, NULL, 0); + const struct of_device_id *match; + + /* Check to ensure we are on a compatible platform */ + match = ti_cpufreq_match_node(); + if (match) + platform_device_register_data(NULL, "ti-cpufreq", -1, match, + sizeof(*match)); + return 0; } module_init(ti_cpufreq_init);
From: Chanho Min chanho.min@lge.com
[ Upstream commit c22397888f1eed98cd59f0a88f2a5f6925f80e15 ]
Suspend fails due to the exec family of functions blocking the freezer. The cause is that de_thread() sleeps in TASK_UNINTERRUPTIBLE waiting for all sub-threads to die, and we have a deadlock if one of them is frozen. This can also occur with the schedule() waiting for the group thread leader to exit if it is frozen.
On our machine, it causes a freeze timeout as below.
Freezing of tasks failed after 20.010 seconds (1 tasks refusing to freeze, wq_busy=0):
setcpushares-ls D ffffffc00008ed70 0 5817 1483 0x0040000d
Call trace:
[<ffffffc00008ed70>] __switch_to+0x88/0xa0
[<ffffffc000d1c30c>] __schedule+0x1bc/0x720
[<ffffffc000d1ca90>] schedule+0x40/0xa8
[<ffffffc0001cd784>] flush_old_exec+0xdc/0x640
[<ffffffc000220360>] load_elf_binary+0x2a8/0x1090
[<ffffffc0001ccff4>] search_binary_handler+0x9c/0x240
[<ffffffc00021c584>] load_script+0x20c/0x228
[<ffffffc0001ccff4>] search_binary_handler+0x9c/0x240
[<ffffffc0001ce8e0>] do_execveat_common.isra.14+0x4f8/0x6e8
[<ffffffc0001cedd0>] compat_SyS_execve+0x38/0x48
[<ffffffc00008de30>] el0_svc_naked+0x24/0x28
To fix this, make de_thread() freezable. It looks safe and works fine.
Suggested-by: Oleg Nesterov oleg@redhat.com Signed-off-by: Chanho Min chanho.min@lge.com Acked-by: Oleg Nesterov oleg@redhat.com Acked-by: Pavel Machek pavel@ucw.cz Acked-by: Michal Hocko mhocko@suse.com Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/exec.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/fs/exec.c b/fs/exec.c index 1ebf6e5a521d..6da8745857cb 100644 --- a/fs/exec.c +++ b/fs/exec.c @@ -62,6 +62,7 @@ #include <linux/oom.h> #include <linux/compat.h> #include <linux/vmalloc.h> +#include <linux/freezer.h>
#include <linux/uaccess.h> #include <asm/mmu_context.h> @@ -1083,7 +1084,7 @@ static int de_thread(struct task_struct *tsk) while (sig->notify_count) { __set_current_state(TASK_KILLABLE); spin_unlock_irq(lock); - schedule(); + freezable_schedule(); if (unlikely(__fatal_signal_pending(tsk))) goto killed; spin_lock_irq(lock); @@ -1111,7 +1112,7 @@ static int de_thread(struct task_struct *tsk) __set_current_state(TASK_KILLABLE); write_unlock_irq(&tasklist_lock); cgroup_threadgroup_change_end(tsk); - schedule(); + freezable_schedule(); if (unlikely(__fatal_signal_pending(tsk))) goto killed; }
From: Connor McAdams conmanx360@gmail.com
[ Upstream commit cce997292a5264c5342c968bbd226d7c365f03d6 ]
This patch adds a new PCI subsys ID for the ZxR, as found and tested by other users. Without a way to know if any Z's use it as well, it keeps the quirk of QUIRK_SBZ and goes through the HDA subsys test function.
Signed-off-by: Connor McAdams conmanx360@gmail.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- sound/pci/hda/patch_ca0132.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c index dffd60cebc31..cc2bb0057810 100644 --- a/sound/pci/hda/patch_ca0132.c +++ b/sound/pci/hda/patch_ca0132.c @@ -1065,6 +1065,7 @@ static const struct snd_pci_quirk ca0132_quirks[] = { SND_PCI_QUIRK(0x1028, 0x0708, "Alienware 15 R2 2016", QUIRK_ALIENWARE), SND_PCI_QUIRK(0x1102, 0x0010, "Sound Blaster Z", QUIRK_SBZ), SND_PCI_QUIRK(0x1102, 0x0023, "Sound Blaster Z", QUIRK_SBZ), + SND_PCI_QUIRK(0x1102, 0x0033, "Sound Blaster ZxR", QUIRK_SBZ), SND_PCI_QUIRK(0x1458, 0xA016, "Recon3Di", QUIRK_R3DI), SND_PCI_QUIRK(0x1458, 0xA026, "Gigabyte G1.Sniper Z97", QUIRK_R3DI), SND_PCI_QUIRK(0x1458, 0xA036, "Gigabyte GA-Z170X-Gaming 7", QUIRK_R3DI),
On Thu, Nov 29, 2018 at 09:51:55AM -0500, Connor McAdams wrote:
This patch won't break anything, but it also won't fix anything either. Not sure if that matters or not.
I'll remove it then, thank you.
-- Thanks, Sasha
From: David Herrmann dh.herrmann@gmail.com
[ Upstream commit 4d26d1d1e8065bb3326a7c06d5d4698e581443a9 ]
This reverts commit 336fd4f5f25157e9e8bd50e898a1bbcd99eaea46.
Please note that `strlcpy()` does *NOT* do what you think it does. strlcpy() *ALWAYS* reads the full input string, regardless of the 'length' parameter. That is, if the input is not zero-terminated, strlcpy() will *READ* beyond input boundaries. It does this, because it always returns the size it *would* copy if the target was big enough, not the truncated size it actually copied.
The original code was perfectly fine. The hid device is zero-initialized and the strncpy() functions copied up to n-1 characters. The result is always zero-terminated this way.
This is the third time someone has tried to replace strncpy with strlcpy in this function and got it wrong. I have now added a comment that should at least make people reconsider.
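A minimal userspace sketch of why the original pattern is safe (the buffer sizes below are made up; the bound is the same min(sizeof, sizeof) - 1 used in uhid_dev_create2()):

#include <stdio.h>
#include <string.h>

int main(void)
{
	char src[64];		/* fixed-size source field, worst case: unterminated */
	char dst[16] = { 0 };	/* destination zero-initialized, like the hid device */
	size_t len;

	memset(src, 'A', sizeof(src));	/* no '\0' anywhere in src */

	/* Copy at most min(sizeof(dst), sizeof(src)) - 1 bytes. strncpy()
	 * never reads or writes past that bound, and the untouched last byte
	 * of dst is still 0, so the result is always NUL-terminated. */
	len = (sizeof(dst) < sizeof(src) ? sizeof(dst) : sizeof(src)) - 1;
	strncpy(dst, src, len);
	printf("dst = \"%s\"\n", dst);

	/* strlcpy(dst, src, len) would instead call strlen(src) internally
	 * and keep reading until it finds a '\0', i.e. past the end of an
	 * unterminated fixed-size field - the out-of-bounds read described
	 * above. */
	return 0;
}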
Signed-off-by: David Herrmann dh.herrmann@gmail.com Signed-off-by: Jiri Kosina jkosina@suse.cz Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hid/uhid.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/drivers/hid/uhid.c b/drivers/hid/uhid.c index 051639c09f72..840634e0f1e3 100644 --- a/drivers/hid/uhid.c +++ b/drivers/hid/uhid.c @@ -497,12 +497,13 @@ static int uhid_dev_create2(struct uhid_device *uhid, goto err_free; }
- len = min(sizeof(hid->name), sizeof(ev->u.create2.name)); - strlcpy(hid->name, ev->u.create2.name, len); - len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys)); - strlcpy(hid->phys, ev->u.create2.phys, len); - len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq)); - strlcpy(hid->uniq, ev->u.create2.uniq, len); + /* @hid is zero-initialized, strncpy() is correct, strlcpy() not */ + len = min(sizeof(hid->name), sizeof(ev->u.create2.name)) - 1; + strncpy(hid->name, ev->u.create2.name, len); + len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys)) - 1; + strncpy(hid->phys, ev->u.create2.phys, len); + len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq)) - 1; + strncpy(hid->uniq, ev->u.create2.uniq, len);
hid->ll_driver = &uhid_hid_driver; hid->bus = ev->u.create2.bus;
From: Rodrigo Rivas Costa rodrigorivascosta@gmail.com
[ Upstream commit 385a4886778f6d6e61eff1d4d295af332d7130e1 ]
Previously, when a HID client such as the Steam Client was running, this driver disabled its input device to avoid doubling the input events.
While it worked mostly fine, some games got confused by the idle gamepad, and switched to two player mode, or asked the user to choose which gamepad to use. Other games just crashed, probably a bug in Unity [1].
With this commit, when a HID client starts, the input device is removed; when the HID client ends, the input device is recreated.
[1]: https://github.com/ValveSoftware/steam-for-linux/issues/5645
Signed-off-by: Rodrigo Rivas Costa rodrigorivascosta@gmail.com Signed-off-by: Jiri Kosina jkosina@suse.cz Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hid/hid-steam.c | 154 +++++++++++++++++++++++----------------- 1 file changed, 90 insertions(+), 64 deletions(-)
diff --git a/drivers/hid/hid-steam.c b/drivers/hid/hid-steam.c index 0422ec2b13d2..dc4128bfe2ca 100644 --- a/drivers/hid/hid-steam.c +++ b/drivers/hid/hid-steam.c @@ -23,8 +23,9 @@ * In order to avoid breaking them this driver creates a layered hidraw device, * so it can detect when the client is running and then: * - it will not send any command to the controller. - * - this input device will be disabled, to avoid double input of the same + * - this input device will be removed, to avoid double input of the same * user action. + * When the client is closed, this input device will be created again. * * For additional functions, such as changing the right-pad margin or switching * the led, you can use the user-space tool at: @@ -113,7 +114,7 @@ struct steam_device { spinlock_t lock; struct hid_device *hdev, *client_hdev; struct mutex mutex; - bool client_opened, input_opened; + bool client_opened; struct input_dev __rcu *input; unsigned long quirks; struct work_struct work_connect; @@ -279,18 +280,6 @@ static void steam_set_lizard_mode(struct steam_device *steam, bool enable) } }
-static void steam_update_lizard_mode(struct steam_device *steam) -{ - mutex_lock(&steam->mutex); - if (!steam->client_opened) { - if (steam->input_opened) - steam_set_lizard_mode(steam, false); - else - steam_set_lizard_mode(steam, lizard_mode); - } - mutex_unlock(&steam->mutex); -} - static int steam_input_open(struct input_dev *dev) { struct steam_device *steam = input_get_drvdata(dev); @@ -301,7 +290,6 @@ static int steam_input_open(struct input_dev *dev) return ret;
mutex_lock(&steam->mutex); - steam->input_opened = true; if (!steam->client_opened && lizard_mode) steam_set_lizard_mode(steam, false); mutex_unlock(&steam->mutex); @@ -313,7 +301,6 @@ static void steam_input_close(struct input_dev *dev) struct steam_device *steam = input_get_drvdata(dev);
mutex_lock(&steam->mutex); - steam->input_opened = false; if (!steam->client_opened && lizard_mode) steam_set_lizard_mode(steam, true); mutex_unlock(&steam->mutex); @@ -400,7 +387,7 @@ static int steam_battery_register(struct steam_device *steam) return 0; }
-static int steam_register(struct steam_device *steam) +static int steam_input_register(struct steam_device *steam) { struct hid_device *hdev = steam->hdev; struct input_dev *input; @@ -414,17 +401,6 @@ static int steam_register(struct steam_device *steam) return 0; }
- /* - * Unlikely, but getting the serial could fail, and it is not so - * important, so make up a serial number and go on. - */ - if (steam_get_serial(steam) < 0) - strlcpy(steam->serial_no, "XXXXXXXXXX", - sizeof(steam->serial_no)); - - hid_info(hdev, "Steam Controller '%s' connected", - steam->serial_no); - input = input_allocate_device(); if (!input) return -ENOMEM; @@ -492,11 +468,6 @@ static int steam_register(struct steam_device *steam) goto input_register_fail;
rcu_assign_pointer(steam->input, input); - - /* ignore battery errors, we can live without it */ - if (steam->quirks & STEAM_QUIRK_WIRELESS) - steam_battery_register(steam); - return 0;
input_register_fail: @@ -504,27 +475,88 @@ static int steam_register(struct steam_device *steam) return ret; }
-static void steam_unregister(struct steam_device *steam) +static void steam_input_unregister(struct steam_device *steam) { struct input_dev *input; + rcu_read_lock(); + input = rcu_dereference(steam->input); + rcu_read_unlock(); + if (!input) + return; + RCU_INIT_POINTER(steam->input, NULL); + synchronize_rcu(); + input_unregister_device(input); +} + +static void steam_battery_unregister(struct steam_device *steam) +{ struct power_supply *battery;
rcu_read_lock(); - input = rcu_dereference(steam->input); battery = rcu_dereference(steam->battery); rcu_read_unlock();
- if (battery) { - RCU_INIT_POINTER(steam->battery, NULL); - synchronize_rcu(); - power_supply_unregister(battery); + if (!battery) + return; + RCU_INIT_POINTER(steam->battery, NULL); + synchronize_rcu(); + power_supply_unregister(battery); +} + +static int steam_register(struct steam_device *steam) +{ + int ret; + + /* + * This function can be called several times in a row with the + * wireless adaptor, without steam_unregister() between them, because + * another client send a get_connection_status command, for example. + * The battery and serial number are set just once per device. + */ + if (!steam->serial_no[0]) { + /* + * Unlikely, but getting the serial could fail, and it is not so + * important, so make up a serial number and go on. + */ + if (steam_get_serial(steam) < 0) + strlcpy(steam->serial_no, "XXXXXXXXXX", + sizeof(steam->serial_no)); + + hid_info(steam->hdev, "Steam Controller '%s' connected", + steam->serial_no); + + /* ignore battery errors, we can live without it */ + if (steam->quirks & STEAM_QUIRK_WIRELESS) + steam_battery_register(steam); + + mutex_lock(&steam_devices_lock); + list_add(&steam->list, &steam_devices); + mutex_unlock(&steam_devices_lock); } - if (input) { - RCU_INIT_POINTER(steam->input, NULL); - synchronize_rcu(); + + mutex_lock(&steam->mutex); + if (!steam->client_opened) { + steam_set_lizard_mode(steam, lizard_mode); + ret = steam_input_register(steam); + } else { + ret = 0; + } + mutex_unlock(&steam->mutex); + + return ret; +} + +static void steam_unregister(struct steam_device *steam) +{ + steam_battery_unregister(steam); + steam_input_unregister(steam); + if (steam->serial_no[0]) { hid_info(steam->hdev, "Steam Controller '%s' disconnected", steam->serial_no); - input_unregister_device(input); + mutex_lock(&steam_devices_lock); + list_del(&steam->list); + mutex_unlock(&steam_devices_lock); + steam->serial_no[0] = 0; } }
@@ -600,6 +632,9 @@ static int steam_client_ll_open(struct hid_device *hdev) mutex_lock(&steam->mutex); steam->client_opened = true; mutex_unlock(&steam->mutex); + + steam_input_unregister(steam); + return ret; }
@@ -609,13 +644,13 @@ static void steam_client_ll_close(struct hid_device *hdev)
mutex_lock(&steam->mutex); steam->client_opened = false; - if (steam->input_opened) - steam_set_lizard_mode(steam, false); - else - steam_set_lizard_mode(steam, lizard_mode); mutex_unlock(&steam->mutex);
hid_hw_close(steam->hdev); + if (steam->connected) { + steam_set_lizard_mode(steam, lizard_mode); + steam_input_register(steam); + } }
static int steam_client_ll_raw_request(struct hid_device *hdev, @@ -744,11 +779,6 @@ static int steam_probe(struct hid_device *hdev, } }
- mutex_lock(&steam_devices_lock); - steam_update_lizard_mode(steam); - list_add(&steam->list, &steam_devices); - mutex_unlock(&steam_devices_lock); - return 0;
hid_hw_open_fail: @@ -774,10 +804,6 @@ static void steam_remove(struct hid_device *hdev) return; }
- mutex_lock(&steam_devices_lock); - list_del(&steam->list); - mutex_unlock(&steam_devices_lock); - hid_destroy_device(steam->client_hdev); steam->client_opened = false; cancel_work_sync(&steam->work_connect); @@ -792,12 +818,14 @@ static void steam_remove(struct hid_device *hdev) static void steam_do_connect_event(struct steam_device *steam, bool connected) { unsigned long flags; + bool changed;
spin_lock_irqsave(&steam->lock, flags); + changed = steam->connected != connected; steam->connected = connected; spin_unlock_irqrestore(&steam->lock, flags);
- if (schedule_work(&steam->work_connect) == 0) + if (changed && schedule_work(&steam->work_connect) == 0) dbg_hid("%s: connected=%d event already queued\n", __func__, connected); } @@ -1019,13 +1047,8 @@ static int steam_raw_event(struct hid_device *hdev, return 0; rcu_read_lock(); input = rcu_dereference(steam->input); - if (likely(input)) { + if (likely(input)) steam_do_input_event(steam, input, data); - } else { - dbg_hid("%s: input data without connect event\n", - __func__); - steam_do_connect_event(steam, true); - } rcu_read_unlock(); break; case STEAM_EV_CONNECT: @@ -1074,7 +1097,10 @@ static int steam_param_set_lizard_mode(const char *val,
mutex_lock(&steam_devices_lock); list_for_each_entry(steam, &steam_devices, list) { - steam_update_lizard_mode(steam); + mutex_lock(&steam->mutex); + if (!steam->client_opened) + steam_set_lizard_mode(steam, lizard_mode); + mutex_unlock(&steam->mutex); } mutex_unlock(&steam_devices_lock); return 0;
From: Kai-Heng Feng kai.heng.feng@canonical.com
[ Upstream commit 12d43aacf9a74d0eb66fd0ea54ebeb79ca28940f ]
The Cirque Touchpad/Pointstick combo is similar to Alps devices; it requires MT_CLS_WIN_8_DUAL to expose its pointstick as a mouse.
Signed-off-by: Kai-Heng Feng kai.heng.feng@canonical.com Signed-off-by: Jiri Kosina jkosina@suse.cz Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hid/hid-ids.h | 3 +++ drivers/hid/hid-multitouch.c | 6 ++++++ 2 files changed, 9 insertions(+)
diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h index a2d25055cd7f..b71c3600f470 100644 --- a/drivers/hid/hid-ids.h +++ b/drivers/hid/hid-ids.h @@ -271,6 +271,9 @@
#define USB_VENDOR_ID_CIDC 0x1677
+#define I2C_VENDOR_ID_CIRQUE 0x0488 +#define I2C_PRODUCT_ID_CIRQUE_121F 0x121F + #define USB_VENDOR_ID_CJTOUCH 0x24b8 #define USB_DEVICE_ID_CJTOUCH_MULTI_TOUCH_0020 0x0020 #define USB_DEVICE_ID_CJTOUCH_MULTI_TOUCH_0040 0x0040 diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c index da954f3f4da7..2faf5421fdd0 100644 --- a/drivers/hid/hid-multitouch.c +++ b/drivers/hid/hid-multitouch.c @@ -1822,6 +1822,12 @@ static const struct hid_device_id mt_devices[] = { MT_USB_DEVICE(USB_VENDOR_ID_CHUNGHWAT, USB_DEVICE_ID_CHUNGHWAT_MULTITOUCH) },
+ /* Cirque devices */ + { .driver_data = MT_CLS_WIN_8_DUAL, + HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8, + I2C_VENDOR_ID_CIRQUE, + I2C_PRODUCT_ID_CIRQUE_121F) }, + /* CJTouch panels */ { .driver_data = MT_CLS_NSMU, MT_USB_DEVICE(USB_VENDOR_ID_CJTOUCH,
From: Thor Thayer thor.thayer@linux.intel.com
[ Upstream commit a6a66f80c85e8e20573ca03fabf32445954a88d5 ]
The current Cadence QSPI driver caused a kernel panic sporadically when writing to QSPI. The problem was caused by writing more bytes than needed because the QSPI operated on 4 bytes at a time.
<snip>
[ 11.202044] Unable to handle kernel paging request at virtual address bffd3000
[ 11.209254] pgd = e463054d
[ 11.211948] [bffd3000] *pgd=2fffb811, *pte=00000000, *ppte=00000000
[ 11.218202] Internal error: Oops: 7 [#1] SMP ARM
[ 11.222797] Modules linked in:
[ 11.225844] CPU: 1 PID: 1317 Comm: systemd-hwdb Not tainted 4.17.7-d0c45cd44a8f
[ 11.235796] Hardware name: Altera SOCFPGA Arria10
[ 11.240487] PC is at __raw_writesl+0x70/0xd4
[ 11.244741] LR is at cqspi_write+0x1a0/0x2cc
</snip>
On a page boundary, limit the number of bytes copied from the tx buffer to remain within the page.
This patch uses a temporary buffer to hold the 4 bytes to write and then copies only the bytes required from the tx buffer.
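The arithmetic of the fix can be checked in isolation; the sketch below (buffer contents and sizes are made up) shows that for a 5-byte write the old DIV_ROUND_UP approach reads 8 bytes from the tx buffer, while the words-plus-tail split reads exactly 5.

#include <stdio.h>
#include <string.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned char txbuf[5] = { 1, 2, 3, 4, 5 };	/* 5 bytes left in the page */
	size_t write_bytes = sizeof(txbuf);
	size_t write_words = write_bytes / 4;
	size_t mod_bytes = write_bytes % 4;
	unsigned int temp = 0xFFFFFFFF;

	/* Old scheme: DIV_ROUND_UP(5, 4) = 2 whole FIFO words, i.e. 8 bytes
	 * are read from a 5-byte buffer - 3 bytes past its end. */
	printf("old: %zu bytes read from txbuf\n", DIV_ROUND_UP(write_bytes, 4) * 4);

	/* New scheme: whole words first, then the remainder goes through a
	 * temporary word, so exactly write_bytes bytes are read. */
	memcpy(&temp, txbuf + write_words * 4, mod_bytes);
	printf("new: %zu bytes read (%zu whole words + %zu tail bytes, tail word 0x%08x)\n",
	       write_words * 4 + mod_bytes, write_words, mod_bytes, temp);
	return 0;
}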
Reported-by: Adrian Amborzewicz adrian.ambrozewicz@intel.com Signed-off-by: Thor Thayer thor.thayer@linux.intel.com Signed-off-by: Boris Brezillon boris.brezillon@bootlin.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mtd/spi-nor/cadence-quadspi.c | 19 ++++++++++++++++--- 1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/drivers/mtd/spi-nor/cadence-quadspi.c b/drivers/mtd/spi-nor/cadence-quadspi.c index 6e9cbd1a0b6d..0806c7a81c0f 100644 --- a/drivers/mtd/spi-nor/cadence-quadspi.c +++ b/drivers/mtd/spi-nor/cadence-quadspi.c @@ -644,9 +644,23 @@ static int cqspi_indirect_write_execute(struct spi_nor *nor, loff_t to_addr, ndelay(cqspi->wr_delay);
while (remaining > 0) { + size_t write_words, mod_bytes; + write_bytes = remaining > page_size ? page_size : remaining; - iowrite32_rep(cqspi->ahb_base, txbuf, - DIV_ROUND_UP(write_bytes, 4)); + write_words = write_bytes / 4; + mod_bytes = write_bytes % 4; + /* Write 4 bytes at a time then single bytes. */ + if (write_words) { + iowrite32_rep(cqspi->ahb_base, txbuf, write_words); + txbuf += (write_words * 4); + } + if (mod_bytes) { + unsigned int temp = 0xFFFFFFFF; + + memcpy(&temp, txbuf, mod_bytes); + iowrite32(temp, cqspi->ahb_base); + txbuf += mod_bytes; + }
if (!wait_for_completion_timeout(&cqspi->transfer_complete, msecs_to_jiffies(CQSPI_TIMEOUT_MS))) { @@ -655,7 +669,6 @@ static int cqspi_indirect_write_execute(struct spi_nor *nor, loff_t to_addr, goto failwr; }
- txbuf += write_bytes; remaining -= write_bytes;
if (remaining > 0)
From: Arthur Kiyanovski akiyano@amazon.com
[ Upstream commit e76ad21d070f79e566ac46ce0b0584c3c93e1b43 ]
During resume from hibernation, if ena_restore_device fails, ena_com_dev_reset() is called and uses the readless read mechanism, which was already destroyed by the call to ena_com_mmio_reg_read_request_destroy(). This causes a NULL pointer dereference.
In this commit we switch the call order of the above two functions to avoid this crash.
Fixes: d7703ddbd7c9 ("net: ena: fix rare bug when failed restart/resume is followed by driver removal") Signed-off-by: Arthur Kiyanovski akiyano@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/amazon/ena/ena_netdev.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c index d906293ce07d..4b73131a0f20 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c @@ -2627,8 +2627,8 @@ static int ena_restore_device(struct ena_adapter *adapter) ena_com_abort_admin_commands(ena_dev); ena_com_wait_for_abort_completion(ena_dev); ena_com_admin_destroy(ena_dev); - ena_com_mmio_reg_read_request_destroy(ena_dev); ena_com_dev_reset(ena_dev, ENA_REGS_RESET_DRIVER_INVALID_STATE); + ena_com_mmio_reg_read_request_destroy(ena_dev); err: clear_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags); clear_bit(ENA_FLAG_ONGOING_RESET, &adapter->flags);
From: Trond Myklebust trond.myklebust@hammerspace.com
[ Upstream commit aeabb3c96186a0f944fc2b1f25c84d5eb3a93fa9 ]
Fix a deadlock whereby the NFSv4 state manager can get stuck in the delegation return code, waiting for a layout return to complete in another thread. If the server reboots before that other thread completes, then we need to be able to start a second state manager thread in order to perform recovery.
Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/nfs/nfs4_fs.h | 2 ++ fs/nfs/nfs4state.c | 16 +++++++++++----- 2 files changed, 13 insertions(+), 5 deletions(-)
diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h index 3a6904173214..63287d911c08 100644 --- a/fs/nfs/nfs4_fs.h +++ b/fs/nfs/nfs4_fs.h @@ -41,6 +41,8 @@ enum nfs4_client_state { NFS4CLNT_MOVED, NFS4CLNT_LEASE_MOVED, NFS4CLNT_DELEGATION_EXPIRED, + NFS4CLNT_RUN_MANAGER, + NFS4CLNT_DELEGRETURN_RUNNING, };
#define NFS4_RENEW_TIMEOUT 0x01 diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c index 18920152da14..d2f645d34eb1 100644 --- a/fs/nfs/nfs4state.c +++ b/fs/nfs/nfs4state.c @@ -1210,6 +1210,7 @@ void nfs4_schedule_state_manager(struct nfs_client *clp) struct task_struct *task; char buf[INET6_ADDRSTRLEN + sizeof("-manager") + 1];
+ set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state); if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0) return; __module_get(THIS_MODULE); @@ -2485,6 +2486,7 @@ static void nfs4_state_manager(struct nfs_client *clp)
/* Ensure exclusive access to NFSv4 state */ do { + clear_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state); if (test_bit(NFS4CLNT_PURGE_STATE, &clp->cl_state)) { section = "purge state"; status = nfs4_purge_lease(clp); @@ -2575,14 +2577,18 @@ static void nfs4_state_manager(struct nfs_client *clp) }
nfs4_end_drain_session(clp); - if (test_and_clear_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state)) { - nfs_client_return_marked_delegations(clp); - continue; + nfs4_clear_state_manager_bit(clp); + + if (!test_and_set_bit(NFS4CLNT_DELEGRETURN_RUNNING, &clp->cl_state)) { + if (test_and_clear_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state)) { + nfs_client_return_marked_delegations(clp); + set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state); + } + clear_bit(NFS4CLNT_DELEGRETURN_RUNNING, &clp->cl_state); }
- nfs4_clear_state_manager_bit(clp); /* Did we race with an attempt to give us more work? */ - if (clp->cl_state == 0) + if (!test_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state)) return; if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0) return;
From: Denis Bolotin denis.bolotin@cavium.com
[ Upstream commit 276d43f0ae963312c0cd0e2b9a85fd11ac65dfcc ]
Fix the condition which verifies that only one flag is set. The API bitmap_weight() should receive size in bits instead of bytes.
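The difference can be sketched with GCC/Clang's __builtin_popcountl() standing in for the kernel's bitmap_weight(); the flag value below is hypothetical and not taken from the driver:

#include <stdio.h>

#define BITS_PER_BYTE	8

/* Simplified single-word bitmap_weight(): count the set bits among the
 * first nbits bits (the kernel version handles arrays of longs). */
static unsigned int weight(unsigned long word, unsigned int nbits)
{
	if (nbits < sizeof(word) * BITS_PER_BYTE)
		word &= (1UL << nbits) - 1;
	return (unsigned int)__builtin_popcountl(word);
}

int main(void)
{
	unsigned int pq_flags = 0x30;	/* hypothetical: two flags set, at bits 4 and 5 */

	/* Buggy call: nbits = sizeof(pq_flags) = 4, so only bits 0-3 are
	 * examined and the two set flags are never counted. */
	printf("weight over %zu bits: %u\n",
	       sizeof(pq_flags), weight(pq_flags, sizeof(pq_flags)));

	/* Fixed call: nbits = 4 * 8 = 32 covers the whole flags word. */
	printf("weight over %zu bits: %u\n",
	       sizeof(pq_flags) * BITS_PER_BYTE,
	       weight(pq_flags, sizeof(pq_flags) * BITS_PER_BYTE));
	return 0;
}

With the byte count, flags above bit 3 would never trip the "multiple flags set" check, which is why the fix multiplies by BITS_PER_BYTE.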
Fixes: b5a9ee7cf3be ("qed: Revise QM cofiguration") Signed-off-by: Denis Bolotin denis.bolotin@cavium.com Signed-off-by: Michal Kalderon michal.kalderon@cavium.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/qlogic/qed/qed_dev.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c index 9d2d18c32162..d2ead4eb1b9a 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_dev.c +++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c @@ -478,8 +478,11 @@ static u16 *qed_init_qm_get_idx_from_flags(struct qed_hwfn *p_hwfn, struct qed_qm_info *qm_info = &p_hwfn->qm_info;
/* Can't have multiple flags set here */ - if (bitmap_weight((unsigned long *)&pq_flags, sizeof(pq_flags)) > 1) + if (bitmap_weight((unsigned long *)&pq_flags, + sizeof(pq_flags) * BITS_PER_BYTE) > 1) { + DP_ERR(p_hwfn, "requested multiple pq flags 0x%x\n", pq_flags); goto err; + }
switch (pq_flags) { case PQ_FLAGS_RLS:
From: Denis Bolotin denis.bolotin@cavium.com
[ Upstream commit eb62cca9bee842e5b23bd0ddfb1f271ca95e8759 ]
The getters' callers don't know the valid Physical Queue (PQ) values. This patch makes sure that a valid PQ will always be returned.
The patch consists of 3 fixes:
- When qed_init_qm_get_idx_from_flags() receives a disabled flag, it returned PQ 0, which can potentially be another function's pq. Verify that the flag is enabled; otherwise return the default start_pq.
- When qed_init_qm_get_idx_from_flags() receives an unknown flag, it returned NULL and could lead to a segmentation fault. Return default start_pq instead.
- A modulo operation was added to MCOS/VFS PQ getters to make sure the PQ returned is in range of the required flag.
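The defensive pattern described above - return a safe default when the flag is not enabled and clamp the requested index into the valid range - can be sketched in a few lines (all names and values below are illustrative, not the qed code):

#include <stdio.h>

static unsigned int start_pq = 100;	/* hypothetical safe default PQ */
static unsigned int first_mcos_pq = 108;	/* hypothetical base of the MCOS range */

static unsigned int get_mcos_pq(unsigned int tc, unsigned int max_tc)
{
	if (max_tc == 0) {
		/* Feature disabled: never hand out PQ 0 (possibly another
		 * function's PQ), fall back to the default start_pq. */
		fprintf(stderr, "MCOS pqs do not exist\n");
		return start_pq;
	}
	if (tc > max_tc)
		fprintf(stderr, "tc %u must be smaller than %u\n", tc, max_tc);

	/* The modulo keeps the returned PQ inside the MCOS range even for an
	 * out-of-range request. */
	return first_mcos_pq + (tc % max_tc);
}

int main(void)
{
	printf("tc 2 -> pq %u\n", get_mcos_pq(2, 4));	/* in range */
	printf("tc 9 -> pq %u\n", get_mcos_pq(9, 4));	/* clamped by the modulo */
	printf("disabled -> pq %u\n", get_mcos_pq(0, 0));	/* safe default */
	return 0;
}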
Fixes: b5a9ee7cf3be ("qed: Revise QM cofiguration") Signed-off-by: Denis Bolotin denis.bolotin@cavium.com Signed-off-by: Michal Kalderon michal.kalderon@cavium.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/qlogic/qed/qed_dev.c | 24 +++++++++++++++++++---- 1 file changed, 20 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c index d2ead4eb1b9a..2f69ee9221c6 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_dev.c +++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c @@ -484,6 +484,11 @@ static u16 *qed_init_qm_get_idx_from_flags(struct qed_hwfn *p_hwfn, goto err; }
+ if (!(qed_get_pq_flags(p_hwfn) & pq_flags)) { + DP_ERR(p_hwfn, "pq flag 0x%x is not set\n", pq_flags); + goto err; + } + switch (pq_flags) { case PQ_FLAGS_RLS: return &qm_info->first_rl_pq; @@ -506,8 +511,7 @@ static u16 *qed_init_qm_get_idx_from_flags(struct qed_hwfn *p_hwfn, }
err: - DP_ERR(p_hwfn, "BAD pq flags %d\n", pq_flags); - return NULL; + return &qm_info->start_pq; }
/* save pq index in qm info */ @@ -531,20 +535,32 @@ u16 qed_get_cm_pq_idx_mcos(struct qed_hwfn *p_hwfn, u8 tc) { u8 max_tc = qed_init_qm_get_num_tcs(p_hwfn);
+ if (max_tc == 0) { + DP_ERR(p_hwfn, "pq with flag 0x%lx do not exist\n", + PQ_FLAGS_MCOS); + return p_hwfn->qm_info.start_pq; + } + if (tc > max_tc) DP_ERR(p_hwfn, "tc %d must be smaller than %d\n", tc, max_tc);
- return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + tc; + return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + (tc % max_tc); }
u16 qed_get_cm_pq_idx_vf(struct qed_hwfn *p_hwfn, u16 vf) { u16 max_vf = qed_init_qm_get_num_vfs(p_hwfn);
+ if (max_vf == 0) { + DP_ERR(p_hwfn, "pq with flag 0x%lx do not exist\n", + PQ_FLAGS_VFS); + return p_hwfn->qm_info.start_pq; + } + if (vf > max_vf) DP_ERR(p_hwfn, "vf %d must be smaller than %d\n", vf, max_vf);
- return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + vf; + return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + (vf % max_vf); }
u16 qed_get_cm_pq_idx_ofld_mtc(struct qed_hwfn *p_hwfn, u8 tc)
From: Juliet Kim julietk@linux.vnet.ibm.com
[ Upstream commit a5681e20b541a507c7d4fd48ae4a4040d32ee1ef ]
This patch changes the driver to use rtnl_lock only during a reset, to avoid a deadlock that could occur when a thread performing close holds rtnl_lock while waiting for reset_lock, which is acquired by another thread that is waiting for rtnl_lock in order to set the number of tx/rx queues during a reset.
Also, we now set the number of tx/rx queues during a soft reset for failover or LPM events.
Signed-off-by: Juliet Kim julietk@linux.vnet.ibm.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/ibm/ibmvnic.c | 59 +++++++++++------------------- drivers/net/ethernet/ibm/ibmvnic.h | 2 +- 2 files changed, 22 insertions(+), 39 deletions(-)
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c index 7661064c815b..a646de07cbdc 100644 --- a/drivers/net/ethernet/ibm/ibmvnic.c +++ b/drivers/net/ethernet/ibm/ibmvnic.c @@ -1103,20 +1103,15 @@ static int ibmvnic_open(struct net_device *netdev) return 0; }
- mutex_lock(&adapter->reset_lock); - if (adapter->state != VNIC_CLOSED) { rc = ibmvnic_login(netdev); - if (rc) { - mutex_unlock(&adapter->reset_lock); + if (rc) return rc; - }
rc = init_resources(adapter); if (rc) { netdev_err(netdev, "failed to initialize resources\n"); release_resources(adapter); - mutex_unlock(&adapter->reset_lock); return rc; } } @@ -1124,8 +1119,6 @@ static int ibmvnic_open(struct net_device *netdev) rc = __ibmvnic_open(netdev); netif_carrier_on(netdev);
- mutex_unlock(&adapter->reset_lock); - return rc; }
@@ -1269,10 +1262,8 @@ static int ibmvnic_close(struct net_device *netdev) return 0; }
- mutex_lock(&adapter->reset_lock); rc = __ibmvnic_close(netdev); ibmvnic_cleanup(netdev); - mutex_unlock(&adapter->reset_lock);
return rc; } @@ -1820,20 +1811,15 @@ static int do_reset(struct ibmvnic_adapter *adapter, return rc; } else if (adapter->req_rx_queues != old_num_rx_queues || adapter->req_tx_queues != old_num_tx_queues) { - adapter->map_id = 1; release_rx_pools(adapter); release_tx_pools(adapter); - rc = init_rx_pools(netdev); - if (rc) - return rc; - rc = init_tx_pools(netdev); - if (rc) - return rc; - release_napi(adapter); - rc = init_napi(adapter); + release_vpd_data(adapter); + + rc = init_resources(adapter); if (rc) return rc; + } else { rc = reset_tx_pools(adapter); if (rc) @@ -1917,17 +1903,8 @@ static int do_hard_reset(struct ibmvnic_adapter *adapter, adapter->state = VNIC_PROBED; return 0; } - /* netif_set_real_num_xx_queues needs to take rtnl lock here - * unless wait_for_reset is set, in which case the rtnl lock - * has already been taken before initializing the reset - */ - if (!adapter->wait_for_reset) { - rtnl_lock(); - rc = init_resources(adapter); - rtnl_unlock(); - } else { - rc = init_resources(adapter); - } + + rc = init_resources(adapter); if (rc) return rc;
@@ -1986,13 +1963,21 @@ static void __ibmvnic_reset(struct work_struct *work) struct ibmvnic_rwi *rwi; struct ibmvnic_adapter *adapter; struct net_device *netdev; + bool we_lock_rtnl = false; u32 reset_state; int rc = 0;
adapter = container_of(work, struct ibmvnic_adapter, ibmvnic_reset); netdev = adapter->netdev;
- mutex_lock(&adapter->reset_lock); + /* netif_set_real_num_xx_queues needs to take rtnl lock here + * unless wait_for_reset is set, in which case the rtnl lock + * has already been taken before initializing the reset + */ + if (!adapter->wait_for_reset) { + rtnl_lock(); + we_lock_rtnl = true; + } reset_state = adapter->state;
rwi = get_next_rwi(adapter); @@ -2020,12 +2005,11 @@ static void __ibmvnic_reset(struct work_struct *work) if (rc) { netdev_dbg(adapter->netdev, "Reset failed\n"); free_all_rwi(adapter); - mutex_unlock(&adapter->reset_lock); - return; }
adapter->resetting = false; - mutex_unlock(&adapter->reset_lock); + if (we_lock_rtnl) + rtnl_unlock(); }
static int ibmvnic_reset(struct ibmvnic_adapter *adapter, @@ -4709,7 +4693,6 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
INIT_WORK(&adapter->ibmvnic_reset, __ibmvnic_reset); INIT_LIST_HEAD(&adapter->rwi_list); - mutex_init(&adapter->reset_lock); mutex_init(&adapter->rwi_lock); adapter->resetting = false;
@@ -4781,8 +4764,8 @@ static int ibmvnic_remove(struct vio_dev *dev) struct ibmvnic_adapter *adapter = netdev_priv(netdev);
adapter->state = VNIC_REMOVING; - unregister_netdev(netdev); - mutex_lock(&adapter->reset_lock); + rtnl_lock(); + unregister_netdevice(netdev);
release_resources(adapter); release_sub_crqs(adapter, 1); @@ -4793,7 +4776,7 @@ static int ibmvnic_remove(struct vio_dev *dev)
adapter->state = VNIC_REMOVED;
- mutex_unlock(&adapter->reset_lock); + rtnl_unlock(); device_remove_file(&dev->dev, &dev_attr_failover); free_netdev(netdev); dev_set_drvdata(&dev->dev, NULL); diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h index f06eec145ca6..735f481b1870 100644 --- a/drivers/net/ethernet/ibm/ibmvnic.h +++ b/drivers/net/ethernet/ibm/ibmvnic.h @@ -1068,7 +1068,7 @@ struct ibmvnic_adapter { struct tasklet_struct tasklet; enum vnic_state state; enum ibmvnic_reset_reason reset_reason; - struct mutex reset_lock, rwi_lock; + struct mutex rwi_lock; struct list_head rwi_list; struct work_struct ibmvnic_reset; bool resetting;
From: David Abdurachmanov david.abdurachmanov@gmail.com
[ Upstream commit 0138ebb90c633f76bc71617f8f23635ce41c84fd ]
Fixes warning: 'struct module' declared inside parameter list will not be visible outside of this definition or declaration
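A minimal standalone illustration of why the one-line forward declaration is enough (u64 spelled out as unsigned long long so the snippet compiles outside the kernel):

/* Without this forward declaration, 'struct module' first appears inside
 * the parameter list below, so it gets a scope limited to the prototype
 * and the compiler warns that it "will not be visible outside". */
struct module;

unsigned long long module_emit_got_entry(struct module *mod,
                                         unsigned long long val);
unsigned long long module_emit_plt_entry(struct module *mod,
                                         unsigned long long val);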
Signed-off-by: David Abdurachmanov david.abdurachmanov@gmail.com Acked-by: Olof Johansson olof@lixom.net Signed-off-by: Palmer Dabbelt palmer@sifive.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/riscv/include/asm/module.h | 1 + 1 file changed, 1 insertion(+)
diff --git a/arch/riscv/include/asm/module.h b/arch/riscv/include/asm/module.h index 349df33808c4..cd2af4b013e3 100644 --- a/arch/riscv/include/asm/module.h +++ b/arch/riscv/include/asm/module.h @@ -8,6 +8,7 @@
#define MODULE_ARCH_VERMAGIC "riscv"
+struct module; u64 module_emit_got_entry(struct module *mod, u64 val); u64 module_emit_plt_entry(struct module *mod, u64 val);
From: Dave Chinner dchinner@redhat.com
[ Upstream commit 0929d8580071c6a1cec1a7916a8f674c243ceee1 ]
When we write into an unwritten extent via direct IO, we dirty metadata on IO completion to convert the unwritten extent to written. However, when we do the FUA optimisation checks, the inode may be clean and so we issue a FUA write into the unwritten extent. This means we then bypass the generic_write_sync() call after unwritten extent conversion has been done and we don't force the modified metadata to stable storage.
This violates O_DSYNC semantics. The window of exposure is a single IO, as the next DIO write will see the inode has dirty metadata and hence will not use the FUA optimisation. Calling generic_write_sync() after completion of the second IO will also sync the first write and its metadata.
Fix this by avoiding the FUA optimisation when writing to unwritten extents.
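A standalone sketch of the resulting decision, with stand-in flag values (the real constants live in include/linux/iomap.h; this is the shape of the check, not a drop-in replacement):

#include <stdbool.h>

/* Stand-in values for a standalone sketch only. */
#define IOMAP_MAPPED        2u
#define IOMAP_F_SHARED      0x4u
#define IOMAP_F_DIRTY       0x2u
#define IOMAP_DIO_WRITE_FUA 0x8u

/* FUA only for pure overwrites of already-written extents, where
 * completion will not dirty any metadata. */
static bool can_use_fua(unsigned type, unsigned iomap_flags,
                        unsigned dio_flags, bool device_supports_fua)
{
    if (type != IOMAP_MAPPED)       /* unwritten/hole: conversion pending */
        return false;
    if (iomap_flags & (IOMAP_F_SHARED | IOMAP_F_DIRTY))
        return false;               /* metadata already needs syncing */
    return device_supports_fua && (dio_flags & IOMAP_DIO_WRITE_FUA);
}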
Signed-off-by: Dave Chinner dchinner@redhat.com Reviewed-by: Christoph Hellwig hch@lst.de Reviewed-by: Darrick J. Wong darrick.wong@oracle.com Signed-off-by: Darrick J. Wong darrick.wong@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/iomap.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/fs/iomap.c b/fs/iomap.c index ec15cf2ec696..fa46e3ed8f53 100644 --- a/fs/iomap.c +++ b/fs/iomap.c @@ -1597,12 +1597,13 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
if (iomap->flags & IOMAP_F_NEW) { need_zeroout = true; - } else { + } else if (iomap->type == IOMAP_MAPPED) { /* - * Use a FUA write if we need datasync semantics, this - * is a pure data IO that doesn't require any metadata - * updates and the underlying device supports FUA. This - * allows us to avoid cache flushes on IO completion. + * Use a FUA write if we need datasync semantics, this is a pure + * data IO that doesn't require any metadata updates (including + * after IO completion such as unwritten extent conversion) and + * the underlying device supports FUA. This allows us to avoid + * cache flushes on IO completion. */ if (!(iomap->flags & (IOMAP_F_SHARED|IOMAP_F_DIRTY)) && (dio->flags & IOMAP_DIO_WRITE_FUA) &&
From: Dave Chinner dchinner@redhat.com
[ Upstream commit b450672fb66b4a991a5b55ee24209ac7ae7690ce ]
If we are doing sub-block dio that extends EOF, we need to zero the unused tail of the block to initialise the data in it. If we do not zero the tail of the block, then an immediate mmap read of the EOF block will expose stale data beyond EOF to userspace. Found with fsx running sub-block DIO sizes vs MAPREAD/MAPWRITE operations.
Fix this by detecting if the end of the DIO write is beyond EOF and zeroing the tail if necessary.
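To make the tail-zeroing arithmetic concrete, a small user-space calculation with made-up numbers (a 4096-byte filesystem block and a write ending at byte 6144):

#include <stdio.h>

int main(void)
{
    unsigned int fs_block_size = 4096;     /* example filesystem block size */
    unsigned long long pos = 6144;         /* end of the DIO write          */

    /* Offset of the write's end inside its block: 6144 & 4095 = 2048 */
    unsigned int pad = pos & (fs_block_size - 1);
    if (pad)
        printf("zero %u bytes, from %llu up to the end of the block\n",
               fs_block_size - pad, pos);  /* 2048 bytes, 6144..8191 */
    return 0;
}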
Signed-off-by: Dave Chinner dchinner@redhat.com Reviewed-by: Christoph Hellwig hch@lst.de Reviewed-by: Darrick J. Wong darrick.wong@oracle.com Signed-off-by: Darrick J. Wong darrick.wong@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/iomap.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/fs/iomap.c b/fs/iomap.c index fa46e3ed8f53..82e35265679d 100644 --- a/fs/iomap.c +++ b/fs/iomap.c @@ -1678,7 +1678,14 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length, dio->submit.cookie = submit_bio(bio); } while (nr_pages);
- if (need_zeroout) { + /* + * We need to zeroout the tail of a sub-block write if the extent type + * requires zeroing or the write extends beyond EOF. If we don't zero + * the block tail in the latter case, we can expose stale data via mmap + * reads of the EOF block. + */ + if (need_zeroout || + ((dio->flags & IOMAP_DIO_WRITE) && pos >= i_size_read(inode))) { /* zero out from the end of the write to the end of the block */ pad = pos & (fs_block_size - 1); if (pad)
On Thu, Nov 29, 2018 at 12:55:43AM -0500, Sasha Levin wrote:
From: Dave Chinner dchinner@redhat.com
[ Upstream commit b450672fb66b4a991a5b55ee24209ac7ae7690ce ]
If we are doing sub-block dio that extends EOF, we need to zero the unused tail of the block to initialise the data in it. If we do not zero the tail of the block, then an immediate mmap read of the EOF block will expose stale data beyond EOF to userspace. Found with fsx running sub-block DIO sizes vs MAPREAD/MAPWRITE operations.
Fix this by detecting if the end of the DIO write is beyond EOF and zeroing the tail if necessary.
Signed-off-by: Dave Chinner dchinner@redhat.com Reviewed-by: Christoph Hellwig hch@lst.de Reviewed-by: Darrick J. Wong darrick.wong@oracle.com Signed-off-by: Darrick J. Wong darrick.wong@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org
fs/iomap.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/fs/iomap.c b/fs/iomap.c index fa46e3ed8f53..82e35265679d 100644 --- a/fs/iomap.c +++ b/fs/iomap.c @@ -1678,7 +1678,14 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length, dio->submit.cookie = submit_bio(bio); } while (nr_pages);
- if (need_zeroout) { + /* + * We need to zeroout the tail of a sub-block write if the extent type + * requires zeroing or the write extends beyond EOF. If we don't zero + * the block tail in the latter case, we can expose stale data via mmap + * reads of the EOF block. + */ + if (need_zeroout || + ((dio->flags & IOMAP_DIO_WRITE) && pos >= i_size_read(inode))) { /* zero out from the end of the write to the end of the block */ pad = pos & (fs_block_size - 1); if (pad)
Same again - what's the test plan for these cherry-picked data corruption fixes?
Cheers,
Dave.
Same again - what's the test plan for these cherry-picked data corruption fixes?
Dave,
Just to check if we are on the same page, if this was the test plan: https://www.spinics.net/lists/linux-xfs/msg20639.html
Would "no regressions from baseline" have been sufficient to validate those specific patches are solid for stable?
Thanks, Amir.
On Thu, Nov 29, 2018 at 02:36:50PM +0200, Amir Goldstein wrote:
Same again - what's the test plan for these cherry-picked data corruption fixes?
Dave,
Just to check if we are on the same page, if this was the test plan: https://www.spinics.net/lists/linux-xfs/msg20639.html
Would "no regressions from baseline" have been sufficient to validate those specific patches are solid for stable?
No, not in this case, because fsx in fstests only does 100k ops at most - it's a smoke test. This isn't sufficient to regression test fixes that, in some cases, took hundreds of millions of fsx ops to expose.
Cheers,
Dave.
From: Dave Chinner dchinner@redhat.com
[ Upstream commit 4721a6010990971440b4ffefbdf014976b8eda2f ]
When doing direct IO to a pipe for do_splice_direct(), the pipe is trivial to fill up and overflow as it can only hold 16 pages. At that point bio_iov_iter_get_pages() returns -EFAULT, and we abort the IO submission process. Unfortunately, iomap_dio_rw() propagates the error back up the stack.
The error is converted from EFAULT to EAGAIN in generic_file_splice_read() to tell the splice layers that the pipe is full. do_splice_direct() completely fails to handle EAGAIN errors (it aborts on error) and returns EAGAIN to the caller.
copy_file_write() then completely fails to handle EAGAIN as well, and so returns EAGAIN to userspace, having failed to copy the data it was asked to.
Avoid this whole steaming pile of fail by having iomap_dio_rw() silently swallow EFAULT errors and so do short reads.
To make matters worse, iomap_dio_actor() has a stale data exposure bug when bio_iov_iter_get_pages() fails - it does not zero the tail of the block that may have been left uncovered by the partial IO. Fix the error handling case to drop through to the sub-block zeroing rather than immediately returning the -EFAULT error.
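The rule this leaves at the return path is the usual "a short IO beats an error" convention; a minimal standalone restatement (EFAULT taken from errno.h, function name invented):

#include <errno.h>

/* If some data was moved, report the short count; only report the error
 * when nothing was transferred at all. -EFAULT from a full pipe is
 * additionally downgraded so splice keeps making progress. */
static long dio_result(long copied, int err)
{
    if (err == -EFAULT)         /* full pipe during splice: not fatal */
        err = 0;                /* report a short IO instead          */
    return copied ? copied : err;
}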
Signed-off-by: Dave Chinner dchinner@redhat.com Reviewed-by: Darrick J. Wong darrick.wong@oracle.com Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Darrick J. Wong darrick.wong@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/iomap.c | 22 +++++++++++++++++++--- 1 file changed, 19 insertions(+), 3 deletions(-)
diff --git a/fs/iomap.c b/fs/iomap.c index 82e35265679d..e2e4a1154c90 100644 --- a/fs/iomap.c +++ b/fs/iomap.c @@ -1581,7 +1581,7 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length, struct bio *bio; bool need_zeroout = false; bool use_fua = false; - int nr_pages, ret; + int nr_pages, ret = 0; size_t copied = 0;
if ((pos | length | align) & ((1 << blkbits) - 1)) @@ -1646,8 +1646,14 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
ret = bio_iov_iter_get_pages(bio, &iter); if (unlikely(ret)) { + /* + * We have to stop part way through an IO. We must fall + * through to the sub-block tail zeroing here, otherwise + * this short IO may expose stale data in the tail of + * the block we haven't written data to. + */ bio_put(bio); - return copied ? copied : ret; + goto zero_tail; }
n = bio->bi_iter.bi_size; @@ -1684,6 +1690,7 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length, * the block tail in the latter case, we can expose stale data via mmap * reads of the EOF block. */ +zero_tail: if (need_zeroout || ((dio->flags & IOMAP_DIO_WRITE) && pos >= i_size_read(inode))) { /* zero out from the end of the write to the end of the block */ @@ -1691,7 +1698,7 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length, if (pad) iomap_dio_zero(dio, iomap, pos, fs_block_size - pad); } - return copied; + return copied ? copied : ret; }
static loff_t @@ -1866,6 +1873,15 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter, dio->wait_for_completion = true; ret = 0; } + + /* + * Splicing to pipes can fail on a full pipe. We have to + * swallow this to make it look like a short IO + * otherwise the higher splice layers will completely + * mishandle the error and stop moving data. + */ + if (ret == -EFAULT) + ret = 0; break; } pos += ret;
From: Dave Chinner dchinner@redhat.com
[ Upstream commit 8c110d43c6bca4b24dd13272a9d4e0ba6f2ec957 ]
When we read the EOF page of the file via readpages, we need to zero the region beyond EOF that we either do not read or should not contain data so that mmap does not expose stale data to user applications.
However, iomap_adjust_read_range() fails to detect EOF correctly, and so fsx on 1k block size filesystems fails very quickly with mapreads exposing data beyond EOF. There are two problems here.
Firstly, when calculating the end block of the EOF byte, we have to reduce the size by one so that a block-aligned EOF does not report an end block one too high; i.e. a size of 1024 bytes is 1 block, which in index terms is block 0. Therefore we have to calculate the end block from (isize - 1), not isize.
The second bug is in determining if the current page spans EOF, and hence whether we need to split it into two halves, one for the IO and the other for zeroing. Unfortunately, the code that checks whether we should split the block doesn't actually check if we span EOF; it just checks if the read spans the /offset in the page/ that EOF sits on. So it splits every read into two if EOF is not page aligned, regardless of whether we are reading the EOF block or not.
Hence we need to restrict the "does the read span EOF" check to just the page that spans EOF, not every page we read.
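The first point is easy to check with a few lines of user-space arithmetic (assuming a 4096-byte page and modelling offset_in_page() as a simple modulo):

#include <stdio.h>

#define PAGE_SIZE 4096u                    /* assumed page size for the example */

int main(void)
{
    unsigned block_bits = 10;              /* 1k blocks, as in the fsx report */
    unsigned long long isize = 1024;       /* block-aligned EOF               */

    /* offset_in_page(x) modelled as x % PAGE_SIZE */
    unsigned wrong = (unsigned)(isize % PAGE_SIZE) >> block_bits;       /* -> 1 */
    unsigned right = (unsigned)((isize - 1) % PAGE_SIZE) >> block_bits; /* -> 0 */

    printf("end block from isize: %u, from (isize - 1): %u\n", wrong, right);
    return 0;
}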
This patch results in correct EOF detection through readpages:
xfs_vm_readpages: dev 259:0 ino 0x43 nr_pages 24 xfs_iomap_found: dev 259:0 ino 0x43 size 0x66c00 offset 0x4f000 count 98304 type hole startoff 0x13c startblock 1368 blockcount 0x4 iomap_readpage_actor: orig pos 323584 pos 323584, length 4096, poff 0 plen 4096, isize 420864 xfs_iomap_found: dev 259:0 ino 0x43 size 0x66c00 offset 0x50000 count 94208 type hole startoff 0x140 startblock 1497 blockcount 0x5c iomap_readpage_actor: orig pos 327680 pos 327680, length 94208, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 331776 pos 331776, length 90112, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 335872 pos 335872, length 86016, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 339968 pos 339968, length 81920, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 344064 pos 344064, length 77824, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 348160 pos 348160, length 73728, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 352256 pos 352256, length 69632, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 356352 pos 356352, length 65536, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 360448 pos 360448, length 61440, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 364544 pos 364544, length 57344, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 368640 pos 368640, length 53248, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 372736 pos 372736, length 49152, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 376832 pos 376832, length 45056, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 380928 pos 380928, length 40960, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 385024 pos 385024, length 36864, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 389120 pos 389120, length 32768, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 393216 pos 393216, length 28672, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 397312 pos 397312, length 24576, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 401408 pos 401408, length 20480, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 405504 pos 405504, length 16384, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 409600 pos 409600, length 12288, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 413696 pos 413696, length 8192, poff 0 plen 4096, isize 420864 iomap_readpage_actor: orig pos 417792 pos 417792, length 4096, poff 0 plen 3072, isize 420864 iomap_readpage_actor: orig pos 420864 pos 420864, length 1024, poff 3072 plen 1024, isize 420864
As you can see, it now does full page reads until the last one which is split correctly at the block aligned EOF, reading 3072 bytes and zeroing the last 1024 bytes. The original version of the patch got this right, but it got another case wrong.
The EOF crossing detection really needs to use the original length, as plen, while it starts at the end of the block, will be shortened as up-to-date blocks are found on the page. This means "orig_pos + plen" no longer points to the end of the page, and so will not correctly detect EOF crossing. Hence we have to use the length passed in to detect this partial page case:
xfs_filemap_fault: dev 259:1 ino 0x43 write_fault 0 xfs_vm_readpage: dev 259:1 ino 0x43 nr_pages 1 xfs_iomap_found: dev 259:1 ino 0x43 size 0x2cc00 offset 0x2c000 count 4096 type hole startoff 0xb0 startblock 282 blockcount 0x4 iomap_readpage_actor: orig pos 180224 pos 181248, length 4096, poff 1024 plen 2048, isize 183296 xfs_iomap_found: dev 259:1 ino 0x43 size 0x2cc00 offset 0x2cc00 count 1024 type hole startoff 0xb3 startblock 285 blockcount 0x1 iomap_readpage_actor: orig pos 183296 pos 183296, length 1024, poff 3072 plen 1024, isize 183296
Here we see a trace where the first block on the EOF page is up to date, hence poff = 1024 bytes. The offset into the page of EOF is 3072, so the range we want to read is 1024 - 3071, and the range we want to zero is 3072 - 4095. You can see this is split correctly now.
This fixes the stale data beyond EOF problem that fsx quickly uncovers on 1k block size filesystems.
Signed-off-by: Dave Chinner dchinner@redhat.com Reviewed-by: Christoph Hellwig hch@lst.de Reviewed-by: Darrick J. Wong darrick.wong@oracle.com Signed-off-by: Darrick J. Wong darrick.wong@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/iomap.c | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/fs/iomap.c b/fs/iomap.c index e2e4a1154c90..cc080445b057 100644 --- a/fs/iomap.c +++ b/fs/iomap.c @@ -143,13 +143,14 @@ static void iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop, loff_t *pos, loff_t length, unsigned *offp, unsigned *lenp) { + loff_t orig_pos = *pos; + loff_t isize = i_size_read(inode); unsigned block_bits = inode->i_blkbits; unsigned block_size = (1 << block_bits); unsigned poff = offset_in_page(*pos); unsigned plen = min_t(loff_t, PAGE_SIZE - poff, length); unsigned first = poff >> block_bits; unsigned last = (poff + plen - 1) >> block_bits; - unsigned end = offset_in_page(i_size_read(inode)) >> block_bits;
/* * If the block size is smaller than the page size we need to check the @@ -184,8 +185,12 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop, * handle both halves separately so that we properly zero data in the * page cache for blocks that are entirely outside of i_size. */ - if (first <= end && last > end) - plen -= (last - end) * block_size; + if (orig_pos <= isize && orig_pos + length > isize) { + unsigned end = offset_in_page(isize - 1) >> block_bits; + + if (first <= end && last > end) + plen -= (last - end) * block_size; + }
*offp = poff; *lenp = plen;
From: Vincent Chen vincentc@andestech.com
[ Upstream commit 426a593e641ebf0d9288f0a2fcab644a86820220 ]
In the original ftmac100_interrupt(), the interrupts are only disabled when the condition "netif_running(netdev)" is true. However, this condition causes a kernel hang in the following case. When the user requests to disable the network device, the kernel will clear the bit __LINK_STATE_START from dev->state and then call the driver's ndo_stop function. Network device interrupts are not blocked during this process. If an interrupt occurs between clearing __LINK_STATE_START and stopping the network device, the kernel cannot disable the interrupts due to the condition "netif_running(netdev)" in the ISR. Hence, the kernel will hang due to the continuous interrupts from the network device.
In order to solve the above problem, the interrupts of the network device should always be disabled in the ISR without being restricted by the condition "netif_running(netdev)".
[V2] Remove unnecessary curly braces.
Signed-off-by: Vincent Chen vincentc@andestech.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/faraday/ftmac100.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/faraday/ftmac100.c b/drivers/net/ethernet/faraday/ftmac100.c index a1197d3adbe0..9015bd911bee 100644 --- a/drivers/net/ethernet/faraday/ftmac100.c +++ b/drivers/net/ethernet/faraday/ftmac100.c @@ -872,11 +872,10 @@ static irqreturn_t ftmac100_interrupt(int irq, void *dev_id) struct net_device *netdev = dev_id; struct ftmac100 *priv = netdev_priv(netdev);
- if (likely(netif_running(netdev))) { - /* Disable interrupts for polling */ - ftmac100_disable_all_int(priv); + /* Disable interrupts for polling */ + ftmac100_disable_all_int(priv); + if (likely(netif_running(netdev))) napi_schedule(&priv->napi); - }
return IRQ_HANDLED; }
From: Pan Bian bianpan2016@163.com
[ Upstream commit 829383e183728dec7ed9150b949cd6de64127809 ]
memunmap() should be used to free the return of memremap(), not iounmap().
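The pairing rule being enforced, as a hedged kernel-style sketch (function name and arguments invented; not compilable outside the kernel):

/* A mapping created by memremap() must be released with memunmap();
 * iounmap() is only the counterpart of ioremap(). */
static int copy_table_sketch(phys_addr_t phys_addr, size_t size)
{
    void *old_ce = memremap(phys_addr, size, MEMREMAP_WB);

    if (!old_ce)
        return -ENOMEM;
    /* ... copy entries out of old_ce ... */
    memunmap(old_ce);           /* counterpart of memremap() */
    return 0;
}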
Fixes: dfddb969edf0 ('iommu/vt-d: Switch from ioremap_cache to memremap') Signed-off-by: Pan Bian bianpan2016@163.com Signed-off-by: Joerg Roedel jroedel@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/iommu/intel-iommu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c index bedc801b06a0..a76c47f20587 100644 --- a/drivers/iommu/intel-iommu.c +++ b/drivers/iommu/intel-iommu.c @@ -3100,7 +3100,7 @@ static int copy_context_table(struct intel_iommu *iommu, }
if (old_ce) - iounmap(old_ce); + memunmap(old_ce);
ret = 0; if (devfn < 0x80)
From: Olga Kornievskaia kolga@netapp.com
[ Upstream commit 99f2c55591fb5c1b536263970d98c2ebc2089906 ]
Bruce pointed out that we shouldn't allocate memory while holding a lock in the nfs4_callback_offload() and handle_async_copy() that deal with a racing CB_OFFLOAD and reply to COPY case.
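A user-space sketch of the "allocate first, lock second" pattern the patch switches to, using malloc and a pthread mutex in place of GFP_NOFS allocation and cl_lock (all names invented for the example):

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct copy_state {
    char stateid[16];
    struct copy_state *next;
};

static pthread_mutex_t cl_lock = PTHREAD_MUTEX_INITIALIZER;
static struct copy_state *pending;      /* list guarded by cl_lock */

/* Allocate before taking the lock; if an existing entry matches, the
 * spare allocation is simply freed after the lock is dropped. */
static struct copy_state *find_or_add(const char stateid[16])
{
    struct copy_state *fresh = calloc(1, sizeof(*fresh));
    struct copy_state *cur, *found = NULL;

    if (!fresh)
        return NULL;
    memcpy(fresh->stateid, stateid, 16);

    pthread_mutex_lock(&cl_lock);
    for (cur = pending; cur; cur = cur->next) {
        if (!memcmp(cur->stateid, stateid, 16)) {
            found = cur;
            break;
        }
    }
    if (!found) {                       /* no match: insert the new entry */
        fresh->next = pending;
        pending = fresh;
        found = fresh;
        fresh = NULL;                   /* ownership moved to the list    */
    }
    pthread_mutex_unlock(&cl_lock);

    free(fresh);                        /* NULL if it was inserted        */
    return found;
}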
Signed-off-by: Olga Kornievskaia kolga@netapp.com Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/nfs/callback_proc.c | 22 +++++++++++----------- fs/nfs/nfs42proc.c | 19 ++++++++++--------- 2 files changed, 21 insertions(+), 20 deletions(-)
diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c index fa515d5ea5ba..48b2e9063b0a 100644 --- a/fs/nfs/callback_proc.c +++ b/fs/nfs/callback_proc.c @@ -686,20 +686,24 @@ __be32 nfs4_callback_offload(void *data, void *dummy, { struct cb_offloadargs *args = data; struct nfs_server *server; - struct nfs4_copy_state *copy; + struct nfs4_copy_state *copy, *tmp_copy; bool found = false;
+ copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS); + if (!copy) + return htonl(NFS4ERR_SERVERFAULT); + spin_lock(&cps->clp->cl_lock); rcu_read_lock(); list_for_each_entry_rcu(server, &cps->clp->cl_superblocks, client_link) { - list_for_each_entry(copy, &server->ss_copies, copies) { + list_for_each_entry(tmp_copy, &server->ss_copies, copies) { if (memcmp(args->coa_stateid.other, - copy->stateid.other, + tmp_copy->stateid.other, sizeof(args->coa_stateid.other))) continue; - nfs4_copy_cb_args(copy, args); - complete(©->completion); + nfs4_copy_cb_args(tmp_copy, args); + complete(&tmp_copy->completion); found = true; goto out; } @@ -707,15 +711,11 @@ __be32 nfs4_callback_offload(void *data, void *dummy, out: rcu_read_unlock(); if (!found) { - copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS); - if (!copy) { - spin_unlock(&cps->clp->cl_lock); - return htonl(NFS4ERR_SERVERFAULT); - } memcpy(©->stateid, &args->coa_stateid, NFS4_STATEID_SIZE); nfs4_copy_cb_args(copy, args); list_add_tail(©->copies, &cps->clp->pending_cb_stateids); - } + } else + kfree(copy); spin_unlock(&cps->clp->cl_lock);
return 0; diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c index ac5b784a1de0..fed06fd9998d 100644 --- a/fs/nfs/nfs42proc.c +++ b/fs/nfs/nfs42proc.c @@ -137,31 +137,32 @@ static int handle_async_copy(struct nfs42_copy_res *res, struct file *dst, nfs4_stateid *src_stateid) { - struct nfs4_copy_state *copy; + struct nfs4_copy_state *copy, *tmp_copy; int status = NFS4_OK; bool found_pending = false; struct nfs_open_context *ctx = nfs_file_open_context(dst);
+ copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS); + if (!copy) + return -ENOMEM; + spin_lock(&server->nfs_client->cl_lock); - list_for_each_entry(copy, &server->nfs_client->pending_cb_stateids, + list_for_each_entry(tmp_copy, &server->nfs_client->pending_cb_stateids, copies) { - if (memcmp(&res->write_res.stateid, ©->stateid, + if (memcmp(&res->write_res.stateid, &tmp_copy->stateid, NFS4_STATEID_SIZE)) continue; found_pending = true; - list_del(©->copies); + list_del(&tmp_copy->copies); break; } if (found_pending) { spin_unlock(&server->nfs_client->cl_lock); + kfree(copy); + copy = tmp_copy; goto out; }
- copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS); - if (!copy) { - spin_unlock(&server->nfs_client->cl_lock); - return -ENOMEM; - } memcpy(©->stateid, &res->write_res.stateid, NFS4_STATEID_SIZE); init_completion(©->completion); copy->parent_state = ctx->state;
From: Tigran Mkrtchyan tigran.mkrtchyan@desy.de
[ Upstream commit bb21ce0ad227b69ec0f83279297ee44232105d96 ]
rfc8435 says:
For tight coupling, ffds_stateid provides the stateid to be used by the client to access the file.
However, the current implementation replaces the per-mirror provided stateid with the open or lock stateid.
Ensure that the per-mirror stateid is used by ff_layout_write_prepare_v4 and nfs4_ff_layout_prepare_ds.
Signed-off-by: Tigran Mkrtchyan tigran.mkrtchyan@desy.de Signed-off-by: Rick Macklem rmacklem@uoguelph.ca Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/nfs/flexfilelayout/flexfilelayout.c | 21 +++++++++------------ fs/nfs/flexfilelayout/flexfilelayout.h | 4 ++++ fs/nfs/flexfilelayout/flexfilelayoutdev.c | 19 +++++++++++++++++++ 3 files changed, 32 insertions(+), 12 deletions(-)
diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c index cae43333ef16..86ac2c5b93fe 100644 --- a/fs/nfs/flexfilelayout/flexfilelayout.c +++ b/fs/nfs/flexfilelayout/flexfilelayout.c @@ -1361,12 +1361,7 @@ static void ff_layout_read_prepare_v4(struct rpc_task *task, void *data) task)) return;
- if (ff_layout_read_prepare_common(task, hdr)) - return; - - if (nfs4_set_rw_stateid(&hdr->args.stateid, hdr->args.context, - hdr->args.lock_context, FMODE_READ) == -EIO) - rpc_exit(task, -EIO); /* lost lock, terminate I/O */ + ff_layout_read_prepare_common(task, hdr); }
static void ff_layout_read_call_done(struct rpc_task *task, void *data) @@ -1542,12 +1537,7 @@ static void ff_layout_write_prepare_v4(struct rpc_task *task, void *data) task)) return;
- if (ff_layout_write_prepare_common(task, hdr)) - return; - - if (nfs4_set_rw_stateid(&hdr->args.stateid, hdr->args.context, - hdr->args.lock_context, FMODE_WRITE) == -EIO) - rpc_exit(task, -EIO); /* lost lock, terminate I/O */ + ff_layout_write_prepare_common(task, hdr); }
static void ff_layout_write_call_done(struct rpc_task *task, void *data) @@ -1742,6 +1732,10 @@ ff_layout_read_pagelist(struct nfs_pgio_header *hdr) fh = nfs4_ff_layout_select_ds_fh(lseg, idx); if (fh) hdr->args.fh = fh; + + if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid)) + goto out_failed; + /* * Note that if we ever decide to split across DSes, * then we may need to handle dense-like offsets. @@ -1804,6 +1798,9 @@ ff_layout_write_pagelist(struct nfs_pgio_header *hdr, int sync) if (fh) hdr->args.fh = fh;
+ if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid)) + goto out_failed; + /* * Note that if we ever decide to split across DSes, * then we may need to handle dense-like offsets. diff --git a/fs/nfs/flexfilelayout/flexfilelayout.h b/fs/nfs/flexfilelayout/flexfilelayout.h index 411798346e48..de50a342d5a5 100644 --- a/fs/nfs/flexfilelayout/flexfilelayout.h +++ b/fs/nfs/flexfilelayout/flexfilelayout.h @@ -215,6 +215,10 @@ unsigned int ff_layout_fetch_ds_ioerr(struct pnfs_layout_hdr *lo, unsigned int maxnum); struct nfs_fh * nfs4_ff_layout_select_ds_fh(struct pnfs_layout_segment *lseg, u32 mirror_idx); +int +nfs4_ff_layout_select_ds_stateid(struct pnfs_layout_segment *lseg, + u32 mirror_idx, + nfs4_stateid *stateid);
struct nfs4_pnfs_ds * nfs4_ff_layout_prepare_ds(struct pnfs_layout_segment *lseg, u32 ds_idx, diff --git a/fs/nfs/flexfilelayout/flexfilelayoutdev.c b/fs/nfs/flexfilelayout/flexfilelayoutdev.c index 59aa04976331..a8df2f496898 100644 --- a/fs/nfs/flexfilelayout/flexfilelayoutdev.c +++ b/fs/nfs/flexfilelayout/flexfilelayoutdev.c @@ -370,6 +370,25 @@ nfs4_ff_layout_select_ds_fh(struct pnfs_layout_segment *lseg, u32 mirror_idx) return fh; }
+int +nfs4_ff_layout_select_ds_stateid(struct pnfs_layout_segment *lseg, + u32 mirror_idx, + nfs4_stateid *stateid) +{ + struct nfs4_ff_layout_mirror *mirror = FF_LAYOUT_COMP(lseg, mirror_idx); + + if (!ff_layout_mirror_valid(lseg, mirror, false)) { + pr_err_ratelimited("NFS: %s: No data server for mirror offset index %d\n", + __func__, mirror_idx); + goto out; + } + + nfs4_stateid_copy(stateid, &mirror->stateid); + return 1; +out: + return 0; +} + /** * nfs4_ff_layout_prepare_ds - prepare a DS connection for an RPC call * @lseg: the layout segment we're operating on
From: Tal Gilboa talgi@mellanox.com
[ Upstream commit 0211dda68a4f6531923a2f72d8e8959207f59fba ]
On every iteration of net_dim, the algorithm may choose to check the system state by comparing the current data sample with the previous data sample. After each of these comparisons, regardless of the action taken, the sample used as the baseline needs to be updated.
This patch fixes a bug that causes DIM to make wrong decisions because the baseline sample used for comparison between iterations is never updated. As a result, DIM always compares the current sample with zeros.
Although this is a functional fix, it also improves and stabilizes performance as the algorithm works properly now.
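A toy user-space calculation of why a never-refreshed baseline breaks the comparison (counter values invented): against a stale zero baseline, every "delta" is the absolute counter value, so it grows without bound instead of describing the last interval.

#include <stdio.h>

int main(void)
{
    unsigned long long pkt_ctr[] = { 1000000, 2000000, 3000000 };
    unsigned long long baseline = 0;

    for (int i = 0; i < 3; i++) {
        unsigned long long buggy = pkt_ctr[i];            /* baseline never updated */
        unsigned long long fixed = pkt_ctr[i] - baseline; /* delta of the last window */

        printf("iteration %d: buggy delta %llu, fixed delta %llu\n",
               i, buggy, fixed);
        baseline = pkt_ctr[i];          /* the fix: refresh the baseline each window */
    }
    return 0;
}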
Performance: Tested single UDP TX stream with pktgen: samples/pktgen/pktgen_sample03_burst_single_flow.sh -i p4p2 -d 1.1.1.1 -m 24:8a:07:88:26:8b -f 3 -b 128
ConnectX-5 100GbE packet rate improved from 15-19Mpps to 19-20Mpps. Also, toggling between profiles is less frequent with the fix.
Fixes: 8115b750dbcb ("net/dim: use struct net_dim_sample as arg to net_dim") Signed-off-by: Tal Gilboa talgi@mellanox.com Reviewed-by: Tariq Toukan tariqt@mellanox.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/net_dim.h | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/include/linux/net_dim.h b/include/linux/net_dim.h index c79e859408e6..fd458389f7d1 100644 --- a/include/linux/net_dim.h +++ b/include/linux/net_dim.h @@ -406,6 +406,8 @@ static inline void net_dim(struct net_dim *dim, } /* fall through */ case NET_DIM_START_MEASURE: + net_dim_sample(end_sample.event_ctr, end_sample.pkt_ctr, end_sample.byte_ctr, + &dim->start_sample); dim->state = NET_DIM_MEASURE_IN_PROGRESS; break; case NET_DIM_APPLY_NEW_PROFILE:
From: Lorenzo Bianconi lorenzo.bianconi@redhat.com
[ Upstream commit 6d0f60b0f8588fd4380ea5df9601e12fddd55ce2 ]
Set the xdp_prog pointer to NULL if bpf_prog_add fails, since that routine returns an error code instead of NULL on failure and the xdp_prog pointer value is used in the driver to verify whether XDP is currently enabled. Moreover, report the error code to userspace if nicvf_xdp_setup fails.
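The underlying convention is that bpf_prog_add(), like many kernel APIs, encodes its failure in the returned pointer rather than returning NULL. A standalone illustration with minimal stand-ins for the ERR_PTR()/IS_ERR()/PTR_ERR() helpers (the real ones live in include/linux/err.h):

#include <stdio.h>

#define MAX_ERRNO 4095UL

static void *ERR_PTR(long err)      { return (void *)err; }
static long  PTR_ERR(const void *p) { return (long)p; }
static int   IS_ERR(const void *p)
{
    return (unsigned long)p >= (unsigned long)-MAX_ERRNO;
}

int main(void)
{
    void *prog = ERR_PTR(-12);      /* e.g. -ENOMEM from a failed "add" */
    int ret = 0;

    if (IS_ERR(prog)) {
        ret = (int)PTR_ERR(prog);   /* report the real error to the caller  */
        prog = NULL;                /* so "prog != NULL" means XDP enabled  */
    }
    printf("ret=%d enabled=%d\n", ret, prog != NULL);
    return 0;
}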
Fixes: 05c773f52b96 ("net: thunderx: Add basic XDP support") Signed-off-by: Lorenzo Bianconi lorenzo.bianconi@redhat.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/cavium/thunder/nicvf_main.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c index 768f584f8392..88f8a8fa93cd 100644 --- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c +++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c @@ -1784,6 +1784,7 @@ static int nicvf_xdp_setup(struct nicvf *nic, struct bpf_prog *prog) bool if_up = netif_running(nic->netdev); struct bpf_prog *old_prog; bool bpf_attached = false; + int ret = 0;
/* For now just support only the usual MTU sized frames */ if (prog && (dev->mtu > 1500)) { @@ -1817,8 +1818,12 @@ static int nicvf_xdp_setup(struct nicvf *nic, struct bpf_prog *prog) if (nic->xdp_prog) { /* Attach BPF program */ nic->xdp_prog = bpf_prog_add(nic->xdp_prog, nic->rx_queues - 1); - if (!IS_ERR(nic->xdp_prog)) + if (!IS_ERR(nic->xdp_prog)) { bpf_attached = true; + } else { + ret = PTR_ERR(nic->xdp_prog); + nic->xdp_prog = NULL; + } }
/* Calculate Tx queues needed for XDP and network stack */ @@ -1830,7 +1835,7 @@ static int nicvf_xdp_setup(struct nicvf *nic, struct bpf_prog *prog) netif_trans_update(nic->netdev); }
- return 0; + return ret; }
static int nicvf_xdp(struct net_device *netdev, struct netdev_bpf *xdp)
From: Thomas Falcon tlfalcon@linux.ibm.com
[ Upstream commit b7cdec3d699db2e5985ad39de0f25d3b6111928e ]
The wrong index is used when cleaning up RX buffer objects during release of RX queues. Update to use the correct index counter.
Signed-off-by: Thomas Falcon tlfalcon@linux.ibm.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/ibm/ibmvnic.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c index a646de07cbdc..f1d4d7a1278b 100644 --- a/drivers/net/ethernet/ibm/ibmvnic.c +++ b/drivers/net/ethernet/ibm/ibmvnic.c @@ -485,8 +485,8 @@ static void release_rx_pools(struct ibmvnic_adapter *adapter)
for (j = 0; j < rx_pool->size; j++) { if (rx_pool->rx_buff[j].skb) { - dev_kfree_skb_any(rx_pool->rx_buff[i].skb); - rx_pool->rx_buff[i].skb = NULL; + dev_kfree_skb_any(rx_pool->rx_buff[j].skb); + rx_pool->rx_buff[j].skb = NULL; } }
From: Thomas Falcon tlfalcon@linux.ibm.com
[ Upstream commit 5bf032ef08e6a110edc1e3bfb3c66a208fb55125 ]
During device reset, queue memory is not being updated to accommodate changes in ring buffer sizes supported by backing hardware. Track any differences in ring buffer sizes following the reset and update queue memory when possible.
Signed-off-by: Thomas Falcon tlfalcon@linux.ibm.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/ibm/ibmvnic.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c index f1d4d7a1278b..5ab21a1b5444 100644 --- a/drivers/net/ethernet/ibm/ibmvnic.c +++ b/drivers/net/ethernet/ibm/ibmvnic.c @@ -1737,6 +1737,7 @@ static int do_reset(struct ibmvnic_adapter *adapter, struct ibmvnic_rwi *rwi, u32 reset_state) { u64 old_num_rx_queues, old_num_tx_queues; + u64 old_num_rx_slots, old_num_tx_slots; struct net_device *netdev = adapter->netdev; int i, rc;
@@ -1748,6 +1749,8 @@ static int do_reset(struct ibmvnic_adapter *adapter,
old_num_rx_queues = adapter->req_rx_queues; old_num_tx_queues = adapter->req_tx_queues; + old_num_rx_slots = adapter->req_rx_add_entries_per_subcrq; + old_num_tx_slots = adapter->req_tx_entries_per_subcrq;
ibmvnic_cleanup(netdev);
@@ -1810,7 +1813,11 @@ static int do_reset(struct ibmvnic_adapter *adapter, if (rc) return rc; } else if (adapter->req_rx_queues != old_num_rx_queues || - adapter->req_tx_queues != old_num_tx_queues) { + adapter->req_tx_queues != old_num_tx_queues || + adapter->req_rx_add_entries_per_subcrq != + old_num_rx_slots || + adapter->req_tx_entries_per_subcrq != + old_num_tx_slots) { release_rx_pools(adapter); release_tx_pools(adapter); release_napi(adapter);
From: Jason Wang jasowang@redhat.com
[ Upstream commit e59ff2c49ae16e1d179de679aca81405829aee6c ]
We don't disable VIRTIO_NET_F_GUEST_CSUM if XDP is set. This means we can receive partially checksummed packets with their metadata kept in the vnet_hdr. This may have several side effects:
- It could be overridden by header adjustment, so it might not be correct after XDP processing. - There's no way to pass such metadata information through XDP_REDIRECT to another driver. - XDP does not support checksum offload right now.
So simply disable guest csum if possible in the case of XDP.
Fixes: 3f93522ffab2d ("virtio-net: switch off offloads on demand if possible on XDP set") Reported-by: Jesper Dangaard Brouer brouer@redhat.com Cc: Jesper Dangaard Brouer brouer@redhat.com Cc: Pavel Popa pashinho1990@gmail.com Cc: David Ahern dsahern@gmail.com Signed-off-by: Jason Wang jasowang@redhat.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/virtio_net.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index ddfa3f24204c..9cc51517e16f 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -70,7 +70,8 @@ static const unsigned long guest_offloads[] = { VIRTIO_NET_F_GUEST_TSO4, VIRTIO_NET_F_GUEST_TSO6, VIRTIO_NET_F_GUEST_ECN, - VIRTIO_NET_F_GUEST_UFO + VIRTIO_NET_F_GUEST_UFO, + VIRTIO_NET_F_GUEST_CSUM };
struct virtnet_stat_desc { @@ -2285,9 +2286,6 @@ static int virtnet_clear_guest_offloads(struct virtnet_info *vi) if (!vi->guest_offloads) return 0;
- if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_CSUM)) - offloads = 1ULL << VIRTIO_NET_F_GUEST_CSUM; - return virtnet_set_guest_offloads(vi, offloads); }
@@ -2297,8 +2295,6 @@ static int virtnet_restore_guest_offloads(struct virtnet_info *vi)
if (!vi->guest_offloads) return 0; - if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_CSUM)) - offloads |= 1ULL << VIRTIO_NET_F_GUEST_CSUM;
return virtnet_set_guest_offloads(vi, offloads); }
From: Jason Wang jasowang@redhat.com
[ Upstream commit 18ba58e1c234ea1a2d9835ac8c1735d965ce4640 ]
We don't support partially checksummed packets since their metadata will be lost or incorrect during XDP processing. So fail the XDP set if the guest_csum feature is negotiated.
Fixes: f600b6905015 ("virtio_net: Add XDP support") Reported-by: Jesper Dangaard Brouer brouer@redhat.com Cc: Jesper Dangaard Brouer brouer@redhat.com Cc: Pavel Popa pashinho1990@gmail.com Cc: David Ahern dsahern@gmail.com Signed-off-by: Jason Wang jasowang@redhat.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/virtio_net.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 9cc51517e16f..c2ca6cd3fbe0 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -2312,8 +2312,9 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog, && (virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_TSO4) || virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_TSO6) || virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_ECN) || - virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_UFO))) { - NL_SET_ERR_MSG_MOD(extack, "Can't set XDP while host is implementing LRO, disable LRO first"); + virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_UFO) || + virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_CSUM))) { + NL_SET_ERR_MSG_MOD(extack, "Can't set XDP while host is implementing LRO/CSUM, disable LRO/CSUM first"); return -EOPNOTSUPP; }
From: Hangbin Liu liuhangbin@gmail.com
[ Upstream commit 5ed9dc99107144f83b6c1bb52a69b58875baf540 ]
team_notify_peers() sends ARP and NA to notify peers, and team_mcast_rejoin() sends a multicast join group message to notify peers. We should do this when a port is enabled or changed to a new port, but it doesn't make sense to do it when a port is disabled.
On the other hand, when we set mcast_rejoin_count to 2 and do a failover, team_port_disable() will increase mcast_rejoin.count_pending to 2 and then team_port_enable() will increase mcast_rejoin.count_pending to 4. We will end up sending 4 mcast rejoin messages, which will confuse the user. The same applies to notify_peers.count.
Fix it by deleting team_notify_peers() and team_mcast_rejoin() in team_port_disable().
Reported-by: Liang Li liali@redhat.com Fixes: fc423ff00df3a ("team: add peer notification") Fixes: 492b200efdd20 ("team: add support for sending multicast rejoins") Signed-off-by: Hangbin Liu liuhangbin@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/team/team.c | 2 -- 1 file changed, 2 deletions(-)
diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c index d887016e54b6..4b6572f0188a 100644 --- a/drivers/net/team/team.c +++ b/drivers/net/team/team.c @@ -985,8 +985,6 @@ static void team_port_disable(struct team *team, team->en_port_count--; team_queue_override_port_del(team, port); team_adjust_ops(team); - team_notify_peers(team); - team_mcast_rejoin(team); team_lower_state_changed(port); }
From: Yangtao Li tiny.windzz@gmail.com
[ Upstream commit c44c749d3b6fdfca39002e7e48e03fe9f9fe37a3 ]
of_find_node_by_path() acquires a reference to the node returned by it and that reference needs to be dropped by its caller. This place doesn't do that, so fix it.
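The reference-counting rule being applied, as a hedged kernel-style fragment (path and surrounding code invented; not compilable on its own):

/* Every successful of_find_node_by_path() takes a reference that must be
 * dropped with of_node_put() on every exit path once the node is no
 * longer used. */
struct device_node *nd = of_find_node_by_path("/options");

if (nd) {
    /* ... read properties from nd ... */
    of_node_put(nd);            /* balance the reference from the lookup */
}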
Signed-off-by: Yangtao Li tiny.windzz@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/amd/sunlance.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/amd/sunlance.c b/drivers/net/ethernet/amd/sunlance.c index cdd7a611479b..19f89d9b1781 100644 --- a/drivers/net/ethernet/amd/sunlance.c +++ b/drivers/net/ethernet/amd/sunlance.c @@ -1419,7 +1419,7 @@ static int sparc_lance_probe_one(struct platform_device *op,
prop = of_get_property(nd, "tpe-link-test?", NULL); if (!prop) - goto no_link_test; + goto node_put;
if (strcmp(prop, "true")) { printk(KERN_NOTICE "SunLance: warning: overriding option " @@ -1428,6 +1428,8 @@ static int sparc_lance_probe_one(struct platform_device *op, "to ecd@skynet.be\n"); auxio_set_lte(AUXIO_LTE_ON); } +node_put: + of_node_put(nd); no_link_test: lp->auto_select = 1; lp->tpe = 0;
From: Lorenzo Bianconi lorenzo.bianconi@redhat.com
[ Upstream commit ef2a7cf1d8831535b8991459567b385661eb4a36 ]
Reset the snd_queue tso_hdrs pointer to NULL in the nicvf_free_snd_queue routine, since it is used to check whether the tso dma descriptor queue has been previously allocated. The issue can be triggered with the following reproducer:
$ip link set dev enP2p1s0v0 xdpdrv obj xdp_dummy.o $ip link set dev enP2p1s0v0 xdpdrv off
[ 341.467649] WARNING: CPU: 74 PID: 2158 at mm/vmalloc.c:1511 __vunmap+0x98/0xe0 [ 341.515010] Hardware name: GIGABYTE H270-T70/MT70-HD0, BIOS T49 02/02/2018 [ 341.521874] pstate: 60400005 (nZCv daif +PAN -UAO) [ 341.526654] pc : __vunmap+0x98/0xe0 [ 341.530132] lr : __vunmap+0x98/0xe0 [ 341.533609] sp : ffff00001c5db860 [ 341.536913] x29: ffff00001c5db860 x28: 0000000000020000 [ 341.542214] x27: ffff810feb5090b0 x26: ffff000017e57000 [ 341.547515] x25: 0000000000000000 x24: 00000000fbd00000 [ 341.552816] x23: 0000000000000000 x22: ffff810feb5090b0 [ 341.558117] x21: 0000000000000000 x20: 0000000000000000 [ 341.563418] x19: ffff000017e57000 x18: 0000000000000000 [ 341.568719] x17: 0000000000000000 x16: 0000000000000000 [ 341.574020] x15: 0000000000000010 x14: ffffffffffffffff [ 341.579321] x13: ffff00008985eb27 x12: ffff00000985eb2f [ 341.584622] x11: ffff0000096b3000 x10: ffff00001c5db510 [ 341.589923] x9 : 00000000ffffffd0 x8 : ffff0000086868e8 [ 341.595224] x7 : 3430303030303030 x6 : 00000000000006ef [ 341.600525] x5 : 00000000003fffff x4 : 0000000000000000 [ 341.605825] x3 : 0000000000000000 x2 : ffffffffffffffff [ 341.611126] x1 : ffff0000096b3728 x0 : 0000000000000038 [ 341.616428] Call trace: [ 341.618866] __vunmap+0x98/0xe0 [ 341.621997] vunmap+0x3c/0x50 [ 341.624961] arch_dma_free+0x68/0xa0 [ 341.628534] dma_direct_free+0x50/0x80 [ 341.632285] nicvf_free_resources+0x160/0x2d8 [nicvf] [ 341.637327] nicvf_config_data_transfer+0x174/0x5e8 [nicvf] [ 341.642890] nicvf_stop+0x298/0x340 [nicvf] [ 341.647066] __dev_close_many+0x9c/0x108 [ 341.650977] dev_close_many+0xa4/0x158 [ 341.654720] rollback_registered_many+0x140/0x530 [ 341.659414] rollback_registered+0x54/0x80 [ 341.663499] unregister_netdevice_queue+0x9c/0xe8 [ 341.668192] unregister_netdev+0x28/0x38 [ 341.672106] nicvf_remove+0xa4/0xa8 [nicvf] [ 341.676280] nicvf_shutdown+0x20/0x30 [nicvf] [ 341.680630] pci_device_shutdown+0x44/0x88 [ 341.684720] device_shutdown+0x144/0x250 [ 341.688640] kernel_restart_prepare+0x44/0x50 [ 341.692986] kernel_restart+0x20/0x68 [ 341.696638] __se_sys_reboot+0x210/0x238 [ 341.700550] __arm64_sys_reboot+0x24/0x30 [ 341.704555] el0_svc_handler+0x94/0x110 [ 341.708382] el0_svc+0x8/0xc [ 341.711252] ---[ end trace 3f4019c8439959c9 ]--- [ 341.715874] page:ffff7e0003ef4000 count:0 mapcount:0 mapping:0000000000000000 index:0x4 [ 341.723872] flags: 0x1fffe000000000() [ 341.727527] raw: 001fffe000000000 ffff7e0003f1a008 ffff7e0003ef4048 0000000000000000 [ 341.735263] raw: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000 [ 341.742994] page dumped because: VM_BUG_ON_PAGE(page_ref_count(page) == 0)
where xdp_dummy.c is a simple bpf program that forwards the incoming frames to the network stack (available here: https://github.com/altoor/xdp_walkthrough_examples/blob/master/sample_1/xdp_...)
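Reduced to user-space terms, the fix is the standard free-and-clear idiom (malloc/free standing in for the DMA-coherent allocation; struct and function names invented):

#include <stdlib.h>

struct snd_queue_sketch {
    void *tso_hdrs;             /* stand-in for the DMA-coherent buffer */
};

/* Safe to call more than once: clearing the pointer after freeing is what
 * lets later teardown paths use it as an "already released?" flag. */
static void free_tso_hdrs(struct snd_queue_sketch *sq)
{
    if (sq->tso_hdrs) {
        free(sq->tso_hdrs);     /* dma_free_coherent() in the driver */
        sq->tso_hdrs = NULL;    /* the one-line fix below            */
    }
}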
Fixes: 05c773f52b96 ("net: thunderx: Add basic XDP support") Fixes: 4863dea3fab0 ("net: Adding support for Cavium ThunderX network controller") Signed-off-by: Lorenzo Bianconi lorenzo.bianconi@redhat.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/cavium/thunder/nicvf_queues.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c index 187a249ff2d1..fcaf18fa3904 100644 --- a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c +++ b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c @@ -585,10 +585,12 @@ static void nicvf_free_snd_queue(struct nicvf *nic, struct snd_queue *sq) if (!sq->dmem.base) return;
- if (sq->tso_hdrs) + if (sq->tso_hdrs) { dma_free_coherent(&nic->pdev->dev, sq->dmem.q_len * TSO_HEADER_SIZE, sq->tso_hdrs, sq->tso_hdrs_phys); + sq->tso_hdrs = NULL; + }
/* Free pending skbs in the queue */ smp_rmb();
From: Andreas Fiedler andreas.fiedler@gmx.net
[ Upstream commit 07093b76476903f820d83d56c3040e656fb4d9e3 ]
The TX stats should be started with the tx_stats_syncp, there seems to be a copy/paste error in the driver.
Signed-off-by: Andreas Fiedler andreas.fiedler@gmx.net Signed-off-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/cortina/gemini.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c index 1c9ad3630c77..dfd1ad0b1cb9 100644 --- a/drivers/net/ethernet/cortina/gemini.c +++ b/drivers/net/ethernet/cortina/gemini.c @@ -661,7 +661,7 @@ static void gmac_clean_txq(struct net_device *netdev, struct gmac_txq *txq,
u64_stats_update_begin(&port->tx_stats_syncp); port->tx_frag_stats[nfrags]++; - u64_stats_update_end(&port->ir_stats_syncp); + u64_stats_update_end(&port->tx_stats_syncp); } }
linux-stable-mirror@lists.linaro.org