From: Ondrej Jirman megous@megous.com
[ Upstream commit a40550952c000667b20082d58077bc647da6c890 ]
Lowering the voltage solves the quick image degradation over time (minutes), which was probably caused by overheating.
Signed-off-by: Ondrej Jirman megous@megous.com Signed-off-by: Maxime Ripard maxime@cerno.tech Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts b/arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts index 397140454132f..6bf93e5ed6817 100644 --- a/arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts +++ b/arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts @@ -358,8 +358,8 @@ };
®_dldo3 { - regulator-min-microvolt = <2800000>; - regulator-max-microvolt = <2800000>; + regulator-min-microvolt = <1800000>; + regulator-max-microvolt = <1800000>; regulator-name = "vdd-csi"; };
From: Ondrej Jirman megous@megous.com
[ Upstream commit a43ab30dcd4a1abcdd0d2461bf1cf7c0817f6cd3 ]
When doing a 16-bit read that returns data in the MSB byte, the RSB_DATA register will keep that MSB byte unchanged during the following 8-bit read. sunxi_rsb_read() will then return a result that contains the high byte from the 16-bit read mixed with the 8-bit result.
The consequence is that after this happens the PMIC's regmap will look like this: (0x33 is the high byte from the 16-bit read)
% cat /sys/kernel/debug/regmap/sunxi-rsb-3a3/registers 00: 33 01: 33 02: 33 03: 33 04: 33 05: 33 06: 33 07: 33 08: 33 09: 33 0a: 33 0b: 33 0c: 33 0d: 33 0e: 33 [snip]
Fix this by masking the result of the read according to the size of the read. There are no 16-bit users in the mainline kernel, so this doesn't need to get into the stable tree.
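For illustration, a hedged userspace sketch (not driver code) of what GENMASK(len * 8 - 1, 0) achieves: it evaluates to 0xff for an 8-bit read, 0xffff for a 16-bit read and 0xffffffff for a 32-bit read, so a stale high byte left in RSB_DATA can no longer leak into a narrower read.

  #include <stdint.h>
  #include <stdio.h>

  /* Userspace stand-in for GENMASK(len * 8 - 1, 0) on a 32-bit data register */
  static uint32_t read_mask(unsigned int len)
  {
          return len >= 4 ? 0xffffffffu : (1u << (len * 8)) - 1;
  }

  int main(void)
  {
          uint32_t rsb_data = 0x00003341;  /* stale 0x33 from an earlier 16-bit read */

          printf("8-bit read:  0x%02x\n", rsb_data & read_mask(1));  /* 0x41 */
          printf("16-bit read: 0x%04x\n", rsb_data & read_mask(2));  /* 0x3341 */
          return 0;
  }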
Signed-off-by: Ondrej Jirman megous@megous.com Acked-by: Chen-Yu Tsai wens@csie.org Signed-off-by: Maxime Ripard maxime@cerno.tech Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/bus/sunxi-rsb.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/bus/sunxi-rsb.c b/drivers/bus/sunxi-rsb.c index be79d6c6a4e45..1bb00a959c67f 100644 --- a/drivers/bus/sunxi-rsb.c +++ b/drivers/bus/sunxi-rsb.c @@ -345,7 +345,7 @@ static int sunxi_rsb_read(struct sunxi_rsb *rsb, u8 rtaddr, u8 addr, if (ret) goto unlock;
- *buf = readl(rsb->regs + RSB_DATA); + *buf = readl(rsb->regs + RSB_DATA) & GENMASK(len * 8 - 1, 0);
unlock: mutex_unlock(&rsb->lock);
From: Tony Lindgren tony@atomide.com
[ Upstream commit b46b2b7ba6e104d265ab705914859ec0db7a98c5 ]
Commit cd28d1d6e52e ("net: phy: at803x: Disable phy delay for RGMII mode") caused a regression for dm814x boards where NFSroot would no longer work.
Let's fix the issue by configuring "rgmii-id" mode, since internal delays are needed and "rgmii" mode no longer provides them.
Signed-off-by: Tony Lindgren tony@atomide.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm/boot/dts/dm8148-evm.dts | 4 ++-- arch/arm/boot/dts/dm8148-t410.dts | 4 ++-- arch/arm/boot/dts/dra62x-j5eco-evm.dts | 4 ++-- 3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/arm/boot/dts/dm8148-evm.dts b/arch/arm/boot/dts/dm8148-evm.dts index 3931fb068ff09..91d1018ab75fc 100644 --- a/arch/arm/boot/dts/dm8148-evm.dts +++ b/arch/arm/boot/dts/dm8148-evm.dts @@ -24,12 +24,12 @@
&cpsw_emac0 { phy-handle = <ðphy0>; - phy-mode = "rgmii"; + phy-mode = "rgmii-id"; };
&cpsw_emac1 { phy-handle = <ðphy1>; - phy-mode = "rgmii"; + phy-mode = "rgmii-id"; };
&davinci_mdio { diff --git a/arch/arm/boot/dts/dm8148-t410.dts b/arch/arm/boot/dts/dm8148-t410.dts index 9e43d5ec0bb2f..79ccdd4470f4c 100644 --- a/arch/arm/boot/dts/dm8148-t410.dts +++ b/arch/arm/boot/dts/dm8148-t410.dts @@ -33,12 +33,12 @@
&cpsw_emac0 { phy-handle = <ðphy0>; - phy-mode = "rgmii"; + phy-mode = "rgmii-id"; };
&cpsw_emac1 { phy-handle = <ðphy1>; - phy-mode = "rgmii"; + phy-mode = "rgmii-id"; };
&davinci_mdio { diff --git a/arch/arm/boot/dts/dra62x-j5eco-evm.dts b/arch/arm/boot/dts/dra62x-j5eco-evm.dts index 861ab90a3f3aa..c16e183822bee 100644 --- a/arch/arm/boot/dts/dra62x-j5eco-evm.dts +++ b/arch/arm/boot/dts/dra62x-j5eco-evm.dts @@ -24,12 +24,12 @@
&cpsw_emac0 { phy-handle = <ðphy0>; - phy-mode = "rgmii"; + phy-mode = "rgmii-id"; };
&cpsw_emac1 { phy-handle = <ðphy1>; - phy-mode = "rgmii"; + phy-mode = "rgmii-id"; };
&davinci_mdio {
From: Yonghong Song yhs@fb.com
[ Upstream commit 1bc7896e9ef44fd77858b3ef0b8a6840be3a4494 ]
When experimenting with bpf_send_signal() helper in our production environment (5.2 based), we experienced a deadlock in NMI mode: #5 [ffffc9002219f770] queued_spin_lock_slowpath at ffffffff8110be24 #6 [ffffc9002219f770] _raw_spin_lock_irqsave at ffffffff81a43012 #7 [ffffc9002219f780] try_to_wake_up at ffffffff810e7ecd #8 [ffffc9002219f7e0] signal_wake_up_state at ffffffff810c7b55 #9 [ffffc9002219f7f0] __send_signal at ffffffff810c8602 #10 [ffffc9002219f830] do_send_sig_info at ffffffff810ca31a #11 [ffffc9002219f868] bpf_send_signal at ffffffff8119d227 #12 [ffffc9002219f988] bpf_overflow_handler at ffffffff811d4140 #13 [ffffc9002219f9e0] __perf_event_overflow at ffffffff811d68cf #14 [ffffc9002219fa10] perf_swevent_overflow at ffffffff811d6a09 #15 [ffffc9002219fa38] ___perf_sw_event at ffffffff811e0f47 #16 [ffffc9002219fc30] __schedule at ffffffff81a3e04d #17 [ffffc9002219fc90] schedule at ffffffff81a3e219 #18 [ffffc9002219fca0] futex_wait_queue_me at ffffffff8113d1b9 #19 [ffffc9002219fcd8] futex_wait at ffffffff8113e529 #20 [ffffc9002219fdf0] do_futex at ffffffff8113ffbc #21 [ffffc9002219fec0] __x64_sys_futex at ffffffff81140d1c #22 [ffffc9002219ff38] do_syscall_64 at ffffffff81002602 #23 [ffffc9002219ff50] entry_SYSCALL_64_after_hwframe at ffffffff81c00068
The above call stack is actually very similar to an issue reported by Commit eac9153f2b58 ("bpf/stackmap: Fix deadlock with rq_lock in bpf_get_stack()") by Song Liu. The only difference is bpf_send_signal() helper instead of bpf_get_stack() helper.
The above deadlock is triggered with a perf_sw_event. Similar to Commit eac9153f2b58, the below almost identical reproducer used the tracepoint sched/sched_switch so the issue can be easily caught.

/* stress_test.c */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>
#include <sys/mman.h>
#include <pthread.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

#define THREAD_COUNT 1000
char *filename;
void *worker(void *p)
{
        void *ptr;
        int fd;
        char *pptr;

        fd = open(filename, O_RDONLY);
        if (fd < 0)
                return NULL;
        while (1) {
                struct timespec ts = {0, 1000 + rand() % 2000};

                ptr = mmap(NULL, 4096 * 64, PROT_READ, MAP_PRIVATE, fd, 0);
                usleep(1);
                if (ptr == MAP_FAILED) {
                        printf("failed to mmap\n");
                        break;
                }
                munmap(ptr, 4096 * 64);
                usleep(1);
                pptr = malloc(1);
                usleep(1);
                pptr[0] = 1;
                usleep(1);
                free(pptr);
                usleep(1);
                nanosleep(&ts, NULL);
        }
        close(fd);
        return NULL;
}

int main(int argc, char *argv[])
{
        void *ptr;
        int i;
        pthread_t threads[THREAD_COUNT];

        if (argc < 2)
                return 0;

        filename = argv[1];

        for (i = 0; i < THREAD_COUNT; i++) {
                if (pthread_create(threads + i, NULL, worker, NULL)) {
                        fprintf(stderr, "Error creating thread\n");
                        return 0;
                }
        }

        for (i = 0; i < THREAD_COUNT; i++)
                pthread_join(threads[i], NULL);
        return 0;
}

and the following steps:

1. run `stress_test /bin/ls` in one window
2. hack bcc trace.py with the following change:

  --- a/tools/trace.py
  +++ b/tools/trace.py
  @@ -513,6 +513,7 @@ BPF_PERF_OUTPUT(%s);
           __data.tgid = __tgid;
           __data.pid = __pid;
           bpf_get_current_comm(&__data.comm, sizeof(__data.comm));
  +        bpf_send_signal(10);
           %s
           %s
           %s.perf_submit(%s, &__data, sizeof(__data));

3. in a different window run
   ./trace.py -p $(pidof stress_test) t:sched:sched_switch
The deadlock can be reproduced in our production system.
Similar to Song's fix, the fix is to delay sending the signal if irqs are disabled, to avoid deadlocks involving rq_lock. With this change, my above stress test in our production system no longer causes a deadlock.
I also implemented a scale-down version of reproducer in the selftest (a subsequent commit). With latest bpf-next, it complains for the following potential deadlock. [ 32.832450] -> #1 (&p->pi_lock){-.-.}: [ 32.833100] _raw_spin_lock_irqsave+0x44/0x80 [ 32.833696] task_rq_lock+0x2c/0xa0 [ 32.834182] task_sched_runtime+0x59/0xd0 [ 32.834721] thread_group_cputime+0x250/0x270 [ 32.835304] thread_group_cputime_adjusted+0x2e/0x70 [ 32.835959] do_task_stat+0x8a7/0xb80 [ 32.836461] proc_single_show+0x51/0xb0 ... [ 32.839512] -> #0 (&(&sighand->siglock)->rlock){....}: [ 32.840275] __lock_acquire+0x1358/0x1a20 [ 32.840826] lock_acquire+0xc7/0x1d0 [ 32.841309] _raw_spin_lock_irqsave+0x44/0x80 [ 32.841916] __lock_task_sighand+0x79/0x160 [ 32.842465] do_send_sig_info+0x35/0x90 [ 32.842977] bpf_send_signal+0xa/0x10 [ 32.843464] bpf_prog_bc13ed9e4d3163e3_send_signal_tp_sched+0x465/0x1000 [ 32.844301] trace_call_bpf+0x115/0x270 [ 32.844809] perf_trace_run_bpf_submit+0x4a/0xc0 [ 32.845411] perf_trace_sched_switch+0x10f/0x180 [ 32.846014] __schedule+0x45d/0x880 [ 32.846483] schedule+0x5f/0xd0 ...
[ 32.853148] Chain exists of: [ 32.853148] &(&sighand->siglock)->rlock --> &p->pi_lock --> &rq->lock [ 32.853148] [ 32.854451] Possible unsafe locking scenario: [ 32.854451] [ 32.855173] CPU0 CPU1 [ 32.855745] ---- ---- [ 32.856278] lock(&rq->lock); [ 32.856671] lock(&p->pi_lock); [ 32.857332] lock(&rq->lock); [ 32.857999] lock(&(&sighand->siglock)->rlock);
A deadlock happens on CPU0 when it tries to acquire &sighand->siglock, which is already held by CPU1, while CPU1 tries to grab &rq->lock and cannot get it.

This is not exactly the call stack in our production environment, but the symptom is similar: both locks are acquired with spin_lock_irqsave(), and both involve rq_lock. The fix to delay sending the signal when irqs are disabled also fixes this issue.
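The deferral machinery itself already exists in kernel/trace/bpf_trace.c; below is a hedged, simplified sketch of the pattern (the names mirror the existing bpf_send_signal() implementation but are illustrative, not verbatim):

  struct send_signal_irq_work {
          struct irq_work irq_work;
          struct task_struct *task;
          u32 sig;
  };

  static void do_bpf_send_signal(struct irq_work *entry)
  {
          struct send_signal_irq_work *work =
                  container_of(entry, struct send_signal_irq_work, irq_work);

          /* Runs later from the irq_work interrupt, after the scheduler path
           * that held rq_lock has finished, so taking sighand->siglock here
           * cannot close the lock chain shown in the lockdep report above.
           */
          group_send_sig_info(work->sig, SEND_SIG_PRIV, work->task, PIDTYPE_TGID);
  }

With this patch, bpf_send_signal() takes that deferred path whenever irqs_disabled() is true, not only when running in NMI context.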
Signed-off-by: Yonghong Song yhs@fb.com Signed-off-by: Alexei Starovoitov ast@kernel.org Cc: Song Liu songliubraving@fb.com Link: https://lore.kernel.org/bpf/20200304191104.2796501-1-yhs@fb.com Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/trace/bpf_trace.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index 89bdac61233db..2372b861f2cfa 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -650,7 +650,7 @@ BPF_CALL_1(bpf_send_signal, u32, sig) if (unlikely(!nmi_uaccess_okay())) return -EPERM;
- if (in_nmi()) { + if (irqs_disabled()) { /* Do an early check on signal validity. Otherwise, * the error is lost in deferred irq_work. */
From: Tariq Toukan tariqt@mellanox.com
[ Upstream commit f28ca65efa87b3fb8da3d69ca7cb1ebc0448de66 ]
Fix to match the HW spec: TRACKING state is 1, SEARCHING is 2. No real issue for now, as these values are not currently used.
Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support") Signed-off-by: Tariq Toukan tariqt@mellanox.com Reviewed-by: Boris Pismenny borisp@mellanox.com Signed-off-by: Saeed Mahameed saeedm@mellanox.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h index a3efa29a4629d..63116be6b1d60 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h @@ -38,8 +38,8 @@ enum {
enum { MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_START = 0, - MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_SEARCHING = 1, - MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_TRACKING = 2, + MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_TRACKING = 1, + MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_SEARCHING = 2, };
struct mlx5e_ktls_offload_context_tx {
From: Mordechay Goodstein mordechay.goodstein@intel.com
[ Upstream commit cb377dfda1755b3bc01436755d866c8e5336a762 ]
The AP may set the LDPC capability only in HE (IEEE80211_HE_PHY_CAP1), but we were checking it only in the HT capabilities.
If we don't use this capability when required, the DSP gets the wrong configuration in HE and doesn't work properly.
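In readable form, the check added by the hunk below is:

  if (he_cap->has_he &&
      (he_cap->he_cap_elem.phy_cap_info[1] &
       IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD))
          flags |= IWL_TLC_MNG_CFG_FLAGS_LDPC_MSK;

so LDPC is now enabled for the station when the peer advertises it in its HE capabilities, not only in its HT capabilities.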
Signed-off-by: Mordechay Goodstein mordechay.goodstein@intel.com Fixes: befebbb30af0 ("iwlwifi: rs: consider LDPC capability in case of HE") Signed-off-by: Luca Coelho luciano.coelho@intel.com Link: https://lore.kernel.org/r/iwlwifi.20200306151128.492d167c1a25.I1ad1353dbbf6c... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c index 098d48153a387..24df3182ec9eb 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c @@ -147,7 +147,11 @@ static u16 rs_fw_get_config_flags(struct iwl_mvm *mvm, (vht_ena && (vht_cap->cap & IEEE80211_VHT_CAP_RXLDPC)))) flags |= IWL_TLC_MNG_CFG_FLAGS_LDPC_MSK;
- /* consider our LDPC support in case of HE */ + /* consider LDPC support in case of HE */ + if (he_cap->has_he && (he_cap->he_cap_elem.phy_cap_info[1] & + IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD)) + flags |= IWL_TLC_MNG_CFG_FLAGS_LDPC_MSK; + if (sband->iftype_data && sband->iftype_data->he_cap.has_he && !(sband->iftype_data->he_cap.he_cap_elem.phy_cap_info[1] & IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD))
From: Mordechay Goodstein mordechay.goodstein@intel.com
[ Upstream commit a5688e600e78f9fc68102bf0fe5c797fc2826abe ]
The TLV offset is only used to read registers, while the offsets used for the FIFO addresses are hard-coded in the driver and not given by the TLV.
If we try to apply the TLV offset when reading the FIFOs, we'll read from invalid addresses, causing the driver to hang.
Signed-off-by: Mordechay Goodstein mordechay.goodstein@intel.com Fixes: 8d7dea25ada7 ("iwlwifi: dbg_ini: implement Rx fifos dump") Signed-off-by: Luca Coelho luciano.coelho@intel.com Link: https://lore.kernel.org/r/iwlwifi.20200306151129.fbab869c26fa.I4ddac20d02f9b... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/intel/iwlwifi/fw/dbg.c | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c index 386ca67ec7b4e..ef65e1ea374fd 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c +++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c @@ -8,7 +8,7 @@ * Copyright(c) 2008 - 2014 Intel Corporation. All rights reserved. * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH * Copyright(c) 2015 - 2017 Intel Deutschland GmbH - * Copyright(c) 2018 - 2019 Intel Corporation + * Copyright(c) 2018 - 2020 Intel Corporation * * This program is free software; you can redistribute it and/or modify * it under the terms of version 2 of the GNU General Public License as @@ -31,7 +31,7 @@ * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved. * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH * Copyright(c) 2015 - 2017 Intel Deutschland GmbH - * Copyright(c) 2018 - 2019 Intel Corporation + * Copyright(c) 2018 - 2020 Intel Corporation * All rights reserved. * * Redistribution and use in source and binary forms, with or without @@ -1373,11 +1373,7 @@ static int iwl_dump_ini_rxf_iter(struct iwl_fw_runtime *fwrt, goto out; }
- /* - * region register have absolute value so apply rxf offset after - * reading the registers - */ - offs += rxf_data.offset; + offs = rxf_data.offset;
/* Lock fence */ iwl_write_prph_no_grab(fwrt->trans, RXF_SET_FENCE_MODE + offs, 0x1);
From: Luca Coelho luciano.coelho@intel.com
[ Upstream commit 699b760bd29edba736590fffef7654cb079c753e ]
If the firmware is in a bad state or not initialized fully, sending the DBGC_SUSPEND_RESUME command fails but we can still collect logs.
Instead of aborting the entire dump process, simply ignore the error. Since this removes the last call site that checked the return value, the function can also be converted to return void.
Signed-off-by: Luca Coelho luciano.coelho@intel.com Fixes: 576058330f2d ("iwlwifi: dbg: support debug recording suspend resume command") Signed-off-by: Luca Coelho luciano.coelho@intel.com Link: https://lore.kernel.org/r/iwlwifi.20200306151129.dcec37b2efd4.I8dcd190431d11... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/intel/iwlwifi/fw/dbg.c | 15 +++++---------- drivers/net/wireless/intel/iwlwifi/fw/dbg.h | 6 +++--- 2 files changed, 8 insertions(+), 13 deletions(-)
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c index ef65e1ea374fd..cb5465d9c0686 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c +++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c @@ -2311,10 +2311,7 @@ static void iwl_fw_dbg_collect_sync(struct iwl_fw_runtime *fwrt, u8 wk_idx) goto out; }
- if (iwl_fw_dbg_stop_restart_recording(fwrt, ¶ms, true)) { - IWL_ERR(fwrt, "Failed to stop DBGC recording, aborting dump\n"); - goto out; - } + iwl_fw_dbg_stop_restart_recording(fwrt, ¶ms, true);
IWL_DEBUG_FW_INFO(fwrt, "WRT: Data collection start\n"); if (iwl_trans_dbg_ini_valid(fwrt->trans)) @@ -2480,14 +2477,14 @@ static int iwl_fw_dbg_restart_recording(struct iwl_trans *trans, return 0; }
-int iwl_fw_dbg_stop_restart_recording(struct iwl_fw_runtime *fwrt, - struct iwl_fw_dbg_params *params, - bool stop) +void iwl_fw_dbg_stop_restart_recording(struct iwl_fw_runtime *fwrt, + struct iwl_fw_dbg_params *params, + bool stop) { int ret = 0;
if (test_bit(STATUS_FW_ERROR, &fwrt->trans->status)) - return 0; + return;
if (fw_has_capa(&fwrt->fw->ucode_capa, IWL_UCODE_TLV_CAPA_DBG_SUSPEND_RESUME_CMD_SUPP)) @@ -2504,7 +2501,5 @@ int iwl_fw_dbg_stop_restart_recording(struct iwl_fw_runtime *fwrt, iwl_fw_set_dbg_rec_on(fwrt); } #endif - - return ret; } IWL_EXPORT_SYMBOL(iwl_fw_dbg_stop_restart_recording); diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.h b/drivers/net/wireless/intel/iwlwifi/fw/dbg.h index e3b5dd34643f5..2ac61629f46f4 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.h @@ -263,9 +263,9 @@ _iwl_fw_dbg_trigger_simple_stop(struct iwl_fw_runtime *fwrt, _iwl_fw_dbg_trigger_simple_stop((fwrt), (wdev), \ iwl_fw_dbg_get_trigger((fwrt)->fw,\ (trig))) -int iwl_fw_dbg_stop_restart_recording(struct iwl_fw_runtime *fwrt, - struct iwl_fw_dbg_params *params, - bool stop); +void iwl_fw_dbg_stop_restart_recording(struct iwl_fw_runtime *fwrt, + struct iwl_fw_dbg_params *params, + bool stop);
#ifdef CONFIG_IWLWIFI_DEBUGFS static inline void iwl_fw_set_dbg_rec_on(struct iwl_fw_runtime *fwrt)
From: Ilan Peer ilan.peer@intel.com
[ Upstream commit ce19801ba75a902ab515dda03b57738c708d0781 ]
The TLC configuration did not take into consideration the station's SMPS configuration, and thus configured rates for 2 NSS even if static SMPS was reported by the station. Fix this.
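In readable form, the clamp that the rate-setting paths now apply (per the hunk below) is:

  u8 nss = sta->rx_nss;

  /* the station supports only a single receive chain */
  if (sta->smps_mode == IEEE80211_SMPS_STATIC)
          nss = 1;

  for (i = 0; i < nss && i < IWL_TLC_NSS_MAX; i++) {
          /* configure rates for spatial stream i as before */
  }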
Signed-off-by: Ilan Peer ilan.peer@intel.com Signed-off-by: Luca Coelho luciano.coelho@intel.com Link: https://lore.kernel.org/r/iwlwifi.20200306151129.b4f940d13eca.Ieebfa889d0820... Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/wireless/intel/iwlwifi/mvm/rs-fw.c | 29 ++++++++++++++----- 1 file changed, 21 insertions(+), 8 deletions(-)
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c index 24df3182ec9eb..5b2bd603febfc 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c @@ -6,7 +6,7 @@ * GPL LICENSE SUMMARY * * Copyright(c) 2017 Intel Deutschland GmbH - * Copyright(c) 2018 - 2019 Intel Corporation + * Copyright(c) 2018 - 2020 Intel Corporation * * This program is free software; you can redistribute it and/or modify * it under the terms of version 2 of the GNU General Public License as @@ -27,7 +27,7 @@ * BSD LICENSE * * Copyright(c) 2017 Intel Deutschland GmbH - * Copyright(c) 2018 - 2019 Intel Corporation + * Copyright(c) 2018 - 2020 Intel Corporation * All rights reserved. * * Redistribution and use in source and binary forms, with or without @@ -195,11 +195,13 @@ rs_fw_vht_set_enabled_rates(const struct ieee80211_sta *sta, { u16 supp; int i, highest_mcs; + u8 nss = sta->rx_nss;
- for (i = 0; i < sta->rx_nss; i++) { - if (i == IWL_TLC_NSS_MAX) - break; + /* the station support only a single receive chain */ + if (sta->smps_mode == IEEE80211_SMPS_STATIC) + nss = 1;
+ for (i = 0; i < nss && i < IWL_TLC_NSS_MAX; i++) { highest_mcs = rs_fw_vht_highest_rx_mcs_index(vht_cap, i + 1); if (!highest_mcs) continue; @@ -245,8 +247,13 @@ rs_fw_he_set_enabled_rates(const struct ieee80211_sta *sta, u16 tx_mcs_160 = le16_to_cpu(sband->iftype_data->he_cap.he_mcs_nss_supp.tx_mcs_160); int i; + u8 nss = sta->rx_nss;
- for (i = 0; i < sta->rx_nss && i < IWL_TLC_NSS_MAX; i++) { + /* the station support only a single receive chain */ + if (sta->smps_mode == IEEE80211_SMPS_STATIC) + nss = 1; + + for (i = 0; i < nss && i < IWL_TLC_NSS_MAX; i++) { u16 _mcs_160 = (mcs_160 >> (2 * i)) & 0x3; u16 _mcs_80 = (mcs_80 >> (2 * i)) & 0x3; u16 _tx_mcs_160 = (tx_mcs_160 >> (2 * i)) & 0x3; @@ -307,8 +314,14 @@ static void rs_fw_set_supp_rates(struct ieee80211_sta *sta, cmd->mode = IWL_TLC_MNG_MODE_HT; cmd->ht_rates[IWL_TLC_NSS_1][IWL_TLC_HT_BW_NONE_160] = cpu_to_le16(ht_cap->mcs.rx_mask[0]); - cmd->ht_rates[IWL_TLC_NSS_2][IWL_TLC_HT_BW_NONE_160] = - cpu_to_le16(ht_cap->mcs.rx_mask[1]); + + /* the station support only a single receive chain */ + if (sta->smps_mode == IEEE80211_SMPS_STATIC) + cmd->ht_rates[IWL_TLC_NSS_2][IWL_TLC_HT_BW_NONE_160] = + 0; + else + cmd->ht_rates[IWL_TLC_NSS_2][IWL_TLC_HT_BW_NONE_160] = + cpu_to_le16(ht_cap->mcs.rx_mask[1]); } }
From: Nick Reitemeyer nick.reitemeyer@web.de
[ Upstream commit da3289044833769188c0da945d2cec90af35e87e ]
The Coreriver TouchCore 360 is like the midas board touchkey, but it is using a fixed regulator.
Signed-off-by: Nick Reitemeyer nick.reitemeyer@web.de Link: https://lore.kernel.org/r/20200121141525.3404-3-nick.reitemeyer@web.de Signed-off-by: Dmitry Torokhov dmitry.torokhov@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/input/keyboard/tm2-touchkey.c | 11 +++++++++++ 1 file changed, 11 insertions(+)
diff --git a/drivers/input/keyboard/tm2-touchkey.c b/drivers/input/keyboard/tm2-touchkey.c index 14b55bacdd0f1..fb078e049413f 100644 --- a/drivers/input/keyboard/tm2-touchkey.c +++ b/drivers/input/keyboard/tm2-touchkey.c @@ -75,6 +75,14 @@ static struct touchkey_variant aries_touchkey_variant = { .cmd_led_off = ARIES_TOUCHKEY_CMD_LED_OFF, };
+static const struct touchkey_variant tc360_touchkey_variant = { + .keycode_reg = 0x00, + .base_reg = 0x00, + .fixed_regulator = true, + .cmd_led_on = TM2_TOUCHKEY_CMD_LED_ON, + .cmd_led_off = TM2_TOUCHKEY_CMD_LED_OFF, +}; + static int tm2_touchkey_led_brightness_set(struct led_classdev *led_dev, enum led_brightness brightness) { @@ -327,6 +335,9 @@ static const struct of_device_id tm2_touchkey_of_match[] = { }, { .compatible = "cypress,aries-touchkey", .data = &aries_touchkey_variant, + }, { + .compatible = "coreriver,tc360-touchkey", + .data = &tc360_touchkey_variant, }, { }, };
From: Grigore Popescu grigore.popescu@nxp.com
[ Upstream commit fe8fe7723a3a824790bda681b40efd767e2251a7 ]
The dpio irqs must be registered when you can actually receive interrupts, i.e. when the dpios are created. The kernel hits NULL pointer dereference errors followed by a kernel panic [1] because the dpio irqs are enabled before the dpio is created.
[1] Unable to handle kernel NULL pointer dereference at virtual address 0040 fsl_mc_dpio dpio.14: probed fsl_mc_dpio dpio.13: Adding to iommu group 11 ISV = 0, ISS = 0x00000004 Unable to handle kernel NULL pointer dereference at virtual address 0040 Mem abort info: ESR = 0x96000004 EC = 0x25: DABT (current EL), IL = 32 bits SET = 0, FnV = 0 EA = 0, S1PTW = 0 Data abort info: ISV = 0, ISS = 0x00000004 CM = 0, WnR = 0 [0000000000000040] user address but active_mm is swapper Internal error: Oops: 96000004 [#1] PREEMPT SMP Modules linked in: CPU: 2 PID: 151 Comm: kworker/2:1 Not tainted 5.6.0-rc4-next-20200304 #1 Hardware name: NXP Layerscape LX2160ARDB (DT) Workqueue: events deferred_probe_work_func pstate: 00000085 (nzcv daIf -PAN -UAO) pc : dpaa2_io_irq+0x18/0xe0 lr : dpio_irq_handler+0x1c/0x28 sp : ffff800010013e20 x29: ffff800010013e20 x28: ffff0026d9b4c140 x27: ffffa1d38a142018 x26: ffff0026d2953400 x25: ffffa1d38a142018 x24: ffffa1d38a7ba1d8 x23: ffff800010013f24 x22: 0000000000000000 x21: 0000000000000072 x20: ffff0026d2953400 x19: ffff0026d2a68b80 x18: 0000000000000001 x17: 000000002fb37f3d x16: 0000000035eafadd x15: ffff0026d9b4c5b8 x14: ffffffffffffffff x13: ff00000000000000 x12: 0000000000000038 x11: 0101010101010101 x10: 0000000000000040 x9 : ffffa1d388db11e4 x8 : ffffa1d38a7e40f0 x7 : ffff0026da414f38 x6 : 0000000000000000 x5 : ffff0026da414d80 x4 : ffff5e5353d0c000 x3 : ffff800010013f60 x2 : ffffa1d388db11c8 x1 : ffff0026d2a67c00 x0 : 0000000000000000 Call trace: dpaa2_io_irq+0x18/0xe0 dpio_irq_handler+0x1c/0x28 __handle_irq_event_percpu+0x78/0x2c0 handle_irq_event_percpu+0x38/0x90 handle_irq_event+0x4c/0xd0 handle_fasteoi_irq+0xbc/0x168 generic_handle_irq+0x2c/0x40 __handle_domain_irq+0x68/0xc0 gic_handle_irq+0x64/0x150 el1_irq+0xb8/0x180 _raw_spin_unlock_irqrestore+0x14/0x48 irq_set_affinity_hint+0x6c/0xa0 dpaa2_dpio_probe+0x2a4/0x518 fsl_mc_driver_probe+0x28/0x70 really_probe+0xdc/0x320 driver_probe_device+0x5c/0xf0 __device_attach_driver+0x88/0xc0 bus_for_each_drv+0x7c/0xc8 __device_attach+0xe4/0x140 device_initial_probe+0x18/0x20 bus_probe_device+0x98/0xa0 device_add+0x41c/0x758 fsl_mc_device_add+0x184/0x530 dprc_scan_objects+0x280/0x370 dprc_probe+0x124/0x3b0 fsl_mc_driver_probe+0x28/0x70 really_probe+0xdc/0x320 driver_probe_device+0x5c/0xf0 __device_attach_driver+0x88/0xc0 bus_for_each_drv+0x7c/0xc8 __device_attach+0xe4/0x140 device_initial_probe+0x18/0x20 bus_probe_device+0x98/0xa0 deferred_probe_work_func+0x74/0xa8 process_one_work+0x1c8/0x470 worker_thread+0x1f8/0x428 kthread+0x124/0x128 ret_from_fork+0x10/0x18 Code: a9bc7bfd 910003fd a9025bf5 a90363f7 (f9402015) ---[ end trace 38298e1a29e7a570 ]--- Kernel panic - not syncing: Fatal exception in interrupt SMP: stopping secondary CPUs Mem abort info: ESR = 0x96000004 CM = 0, WnR = 0 EC = 0x25: DABT (current EL), IL = 32 bits [0000000000000040] user address but active_mm is swapper SET = 0, FnV = 0 EA = 0, S1PTW = 0 Data abort info: ISV = 0, ISS = 0x00000004 CM = 0, WnR = 0 [0000000000000040] user address but active_mm is swapper SMP: failed to stop secondary CPUs 0-2 Kernel Offset: 0x21d378600000 from 0xffff800010000000 PHYS_OFFSET: 0xffffe92180000000 CPU features: 0x10002,21806008 Memory Limit: none ---[ end Kernel panic - not syncing: Fatal exception in interrupt ]---
Signed-off-by: Laurentiu Tudor laurentiu.tudor@nxp.com Signed-off-by: Grigore Popescu grigore.popescu@nxp.com Signed-off-by: Li Yang leoyang.li@nxp.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/soc/fsl/dpio/dpio-driver.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/soc/fsl/dpio/dpio-driver.c b/drivers/soc/fsl/dpio/dpio-driver.c index 70014ecce2a7e..7b642c330977f 100644 --- a/drivers/soc/fsl/dpio/dpio-driver.c +++ b/drivers/soc/fsl/dpio/dpio-driver.c @@ -233,10 +233,6 @@ static int dpaa2_dpio_probe(struct fsl_mc_device *dpio_dev) goto err_allocate_irqs; }
- err = register_dpio_irq_handlers(dpio_dev, desc.cpu); - if (err) - goto err_register_dpio_irq; - priv->io = dpaa2_io_create(&desc, dev); if (!priv->io) { dev_err(dev, "dpaa2_io_create failed\n"); @@ -244,6 +240,10 @@ static int dpaa2_dpio_probe(struct fsl_mc_device *dpio_dev) goto err_dpaa2_io_create; }
+ err = register_dpio_irq_handlers(dpio_dev, desc.cpu); + if (err) + goto err_register_dpio_irq; + dev_info(dev, "probed\n"); dev_dbg(dev, " receives_notifications = %d\n", desc.receives_notifications);
From: David Howells dhowells@redhat.com
[ Upstream commit 158fe6665389964a1de212818b4a5c52b7f7aff4 ]
Abstract out the calculation of whether there is sufficient Tx buffer space, as it is repeated several times in the rxrpc sendmsg code.
Signed-off-by: David Howells dhowells@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/rxrpc/sendmsg.c | 27 ++++++++++++++++++--------- 1 file changed, 18 insertions(+), 9 deletions(-)
diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index 813fd68881429..a13051d380973 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -17,6 +17,21 @@ #include <net/af_rxrpc.h> #include "ar-internal.h"
+/* + * Return true if there's sufficient Tx queue space. + */ +static bool rxrpc_check_tx_space(struct rxrpc_call *call, rxrpc_seq_t *_tx_win) +{ + unsigned int win_size = + min_t(unsigned int, call->tx_winsize, + call->cong_cwnd + call->cong_extra); + rxrpc_seq_t tx_win = READ_ONCE(call->tx_hard_ack); + + if (_tx_win) + *_tx_win = tx_win; + return call->tx_top - tx_win < win_size; +} + /* * Wait for space to appear in the Tx queue or a signal to occur. */ @@ -26,9 +41,7 @@ static int rxrpc_wait_for_tx_window_intr(struct rxrpc_sock *rx, { for (;;) { set_current_state(TASK_INTERRUPTIBLE); - if (call->tx_top - call->tx_hard_ack < - min_t(unsigned int, call->tx_winsize, - call->cong_cwnd + call->cong_extra)) + if (rxrpc_check_tx_space(call, NULL)) return 0;
if (call->state >= RXRPC_CALL_COMPLETE) @@ -68,9 +81,7 @@ static int rxrpc_wait_for_tx_window_nonintr(struct rxrpc_sock *rx, set_current_state(TASK_UNINTERRUPTIBLE);
tx_win = READ_ONCE(call->tx_hard_ack); - if (call->tx_top - tx_win < - min_t(unsigned int, call->tx_winsize, - call->cong_cwnd + call->cong_extra)) + if (rxrpc_check_tx_space(call, &tx_win)) return 0;
if (call->state >= RXRPC_CALL_COMPLETE) @@ -302,9 +313,7 @@ static int rxrpc_send_data(struct rxrpc_sock *rx,
_debug("alloc");
- if (call->tx_top - call->tx_hard_ack >= - min_t(unsigned int, call->tx_winsize, - call->cong_cwnd + call->cong_extra)) { + if (!rxrpc_check_tx_space(call, NULL)) { ret = -EAGAIN; if (msg->msg_flags & MSG_DONTWAIT) goto maybe_error;
From: David Howells dhowells@redhat.com
[ Upstream commit e138aa7d3271ac1b0690ae2c9b04d51468dce1d6 ]
Fix the interruptibility of kernel-initiated client calls so that they're either only interruptible when they're waiting for a call slot to become available or they're not interruptible at all. Either way, they're not interruptible during transmission.
This should help prevent StoreData calls from being interrupted when writeback is in progress. It doesn't, however, handle interruption during the receive phase.
Userspace-initiated calls are still interruptible. After the signal has been handled, sendmsg() will return the amount of data copied out of the buffer and userspace can perform another sendmsg() call to continue transmission.
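Condensed from the conn_client.c hunk below, the wait for a call slot now picks its sleep state from the new mode, and only the two interruptible modes honour pending signals:

  switch (call->interruptibility) {
  case RXRPC_INTERRUPTIBLE:
  case RXRPC_PREINTERRUPTIBLE:
          set_current_state(TASK_INTERRUPTIBLE);   /* a signal may cancel the wait */
          break;
  case RXRPC_UNINTERRUPTIBLE:
  default:
          set_current_state(TASK_UNINTERRUPTIBLE);
          break;
  }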
Fixes: bc5e3a546d55 ("rxrpc: Use MSG_WAITALL to tell sendmsg() to temporarily ignore signals") Signed-off-by: David Howells dhowells@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/afs/rxrpc.c | 3 ++- include/net/af_rxrpc.h | 8 +++++++- net/rxrpc/af_rxrpc.c | 4 ++-- net/rxrpc/ar-internal.h | 4 ++-- net/rxrpc/call_object.c | 3 +-- net/rxrpc/conn_client.c | 13 +++++++++--- net/rxrpc/sendmsg.c | 44 +++++++++++++++++++++++++++++++++-------- 7 files changed, 60 insertions(+), 19 deletions(-)
diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c index ef1d09f8920bd..52aa90fb4fbd9 100644 --- a/fs/afs/rxrpc.c +++ b/fs/afs/rxrpc.c @@ -414,7 +414,8 @@ void afs_make_call(struct afs_addr_cursor *ac, struct afs_call *call, gfp_t gfp) afs_wake_up_async_call : afs_wake_up_call_waiter), call->upgrade, - call->intr, + (call->intr ? RXRPC_PREINTERRUPTIBLE : + RXRPC_UNINTERRUPTIBLE), call->debug_id); if (IS_ERR(rxcall)) { ret = PTR_ERR(rxcall); diff --git a/include/net/af_rxrpc.h b/include/net/af_rxrpc.h index 299240df79e4a..04e97bab6f28b 100644 --- a/include/net/af_rxrpc.h +++ b/include/net/af_rxrpc.h @@ -16,6 +16,12 @@ struct sock; struct socket; struct rxrpc_call;
+enum rxrpc_interruptibility { + RXRPC_INTERRUPTIBLE, /* Call is interruptible */ + RXRPC_PREINTERRUPTIBLE, /* Call can be cancelled whilst waiting for a slot */ + RXRPC_UNINTERRUPTIBLE, /* Call should not be interruptible at all */ +}; + /* * Debug ID counter for tracing. */ @@ -41,7 +47,7 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *, gfp_t, rxrpc_notify_rx_t, bool, - bool, + enum rxrpc_interruptibility, unsigned int); int rxrpc_kernel_send_data(struct socket *, struct rxrpc_call *, struct msghdr *, size_t, diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c index a293238fe1e7e..2921fc2767134 100644 --- a/net/rxrpc/af_rxrpc.c +++ b/net/rxrpc/af_rxrpc.c @@ -285,7 +285,7 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock, gfp_t gfp, rxrpc_notify_rx_t notify_rx, bool upgrade, - bool intr, + enum rxrpc_interruptibility interruptibility, unsigned int debug_id) { struct rxrpc_conn_parameters cp; @@ -310,7 +310,7 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock, memset(&p, 0, sizeof(p)); p.user_call_ID = user_call_ID; p.tx_total_len = tx_total_len; - p.intr = intr; + p.interruptibility = interruptibility;
memset(&cp, 0, sizeof(cp)); cp.local = rx->local; diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 394d18857979a..3eb1ab40ca5cb 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -489,7 +489,6 @@ enum rxrpc_call_flag { RXRPC_CALL_BEGAN_RX_TIMER, /* We began the expect_rx_by timer */ RXRPC_CALL_RX_HEARD, /* The peer responded at least once to this call */ RXRPC_CALL_RX_UNDERRUN, /* Got data underrun */ - RXRPC_CALL_IS_INTR, /* The call is interruptible */ RXRPC_CALL_DISCONNECTED, /* The call has been disconnected */ };
@@ -598,6 +597,7 @@ struct rxrpc_call { atomic_t usage; u16 service_id; /* service ID */ u8 security_ix; /* Security type */ + enum rxrpc_interruptibility interruptibility; /* At what point call may be interrupted */ u32 call_id; /* call ID on connection */ u32 cid; /* connection ID plus channel index */ int debug_id; /* debug ID for printks */ @@ -720,7 +720,7 @@ struct rxrpc_call_params { u32 normal; /* Max time since last call packet (msec) */ } timeouts; u8 nr_timeouts; /* Number of timeouts specified */ - bool intr; /* The call is interruptible */ + enum rxrpc_interruptibility interruptibility; /* How is interruptible is the call? */ };
struct rxrpc_send_params { diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index c9f34b0a11df4..f07970207b544 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -237,8 +237,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx, return call; }
- if (p->intr) - __set_bit(RXRPC_CALL_IS_INTR, &call->flags); + call->interruptibility = p->interruptibility; call->tx_total_len = p->tx_total_len; trace_rxrpc_call(call->debug_id, rxrpc_call_new_client, atomic_read(&call->usage), diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index ea7d4c21f8893..f2a1a5dbb5a7b 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -655,13 +655,20 @@ static int rxrpc_wait_for_channel(struct rxrpc_call *call, gfp_t gfp)
add_wait_queue_exclusive(&call->waitq, &myself); for (;;) { - if (test_bit(RXRPC_CALL_IS_INTR, &call->flags)) + switch (call->interruptibility) { + case RXRPC_INTERRUPTIBLE: + case RXRPC_PREINTERRUPTIBLE: set_current_state(TASK_INTERRUPTIBLE); - else + break; + case RXRPC_UNINTERRUPTIBLE: + default: set_current_state(TASK_UNINTERRUPTIBLE); + break; + } if (call->call_id) break; - if (test_bit(RXRPC_CALL_IS_INTR, &call->flags) && + if ((call->interruptibility == RXRPC_INTERRUPTIBLE || + call->interruptibility == RXRPC_PREINTERRUPTIBLE) && signal_pending(current)) { ret = -ERESTARTSYS; break; diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index a13051d380973..1eccfb92c9e15 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -62,7 +62,7 @@ static int rxrpc_wait_for_tx_window_intr(struct rxrpc_sock *rx, * Wait for space to appear in the Tx queue uninterruptibly, but with * a timeout of 2*RTT if no progress was made and a signal occurred. */ -static int rxrpc_wait_for_tx_window_nonintr(struct rxrpc_sock *rx, +static int rxrpc_wait_for_tx_window_waitall(struct rxrpc_sock *rx, struct rxrpc_call *call) { rxrpc_seq_t tx_start, tx_win; @@ -87,8 +87,7 @@ static int rxrpc_wait_for_tx_window_nonintr(struct rxrpc_sock *rx, if (call->state >= RXRPC_CALL_COMPLETE) return call->error;
- if (test_bit(RXRPC_CALL_IS_INTR, &call->flags) && - timeout == 0 && + if (timeout == 0 && tx_win == tx_start && signal_pending(current)) return -EINTR;
@@ -102,6 +101,26 @@ static int rxrpc_wait_for_tx_window_nonintr(struct rxrpc_sock *rx, } }
+/* + * Wait for space to appear in the Tx queue uninterruptibly. + */ +static int rxrpc_wait_for_tx_window_nonintr(struct rxrpc_sock *rx, + struct rxrpc_call *call, + long *timeo) +{ + for (;;) { + set_current_state(TASK_UNINTERRUPTIBLE); + if (rxrpc_check_tx_space(call, NULL)) + return 0; + + if (call->state >= RXRPC_CALL_COMPLETE) + return call->error; + + trace_rxrpc_transmit(call, rxrpc_transmit_wait); + *timeo = schedule_timeout(*timeo); + } +} + /* * wait for space to appear in the transmit/ACK window * - caller holds the socket locked @@ -119,10 +138,19 @@ static int rxrpc_wait_for_tx_window(struct rxrpc_sock *rx,
add_wait_queue(&call->waitq, &myself);
- if (waitall) - ret = rxrpc_wait_for_tx_window_nonintr(rx, call); - else - ret = rxrpc_wait_for_tx_window_intr(rx, call, timeo); + switch (call->interruptibility) { + case RXRPC_INTERRUPTIBLE: + if (waitall) + ret = rxrpc_wait_for_tx_window_waitall(rx, call); + else + ret = rxrpc_wait_for_tx_window_intr(rx, call, timeo); + break; + case RXRPC_PREINTERRUPTIBLE: + case RXRPC_UNINTERRUPTIBLE: + default: + ret = rxrpc_wait_for_tx_window_nonintr(rx, call, timeo); + break; + }
remove_wait_queue(&call->waitq, &myself); set_current_state(TASK_RUNNING); @@ -628,7 +656,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len) .call.tx_total_len = -1, .call.user_call_ID = 0, .call.nr_timeouts = 0, - .call.intr = true, + .call.interruptibility = RXRPC_INTERRUPTIBLE, .abort_code = 0, .command = RXRPC_CMD_SEND_DATA, .exclusive = false,
From: David Howells dhowells@redhat.com
[ Upstream commit 498b577660f08cef5d9e78e0ed6dcd4c0939e98c ]
Fix the handling of sendmsg() with MSG_WAITALL for userspace so that the timeout applied when a signal occurs is rounded up to at least two jiffies, as a 1-jiffy timeout may end up being effectively 0 if jiffies wraps at the wrong time.
Fixes: bc5e3a546d55 ("rxrpc: Use MSG_WAITALL to tell sendmsg() to temporarily ignore signals") Signed-off-by: David Howells dhowells@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/rxrpc/sendmsg.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index 1eccfb92c9e15..0fcf157aa09f8 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -71,8 +71,8 @@ static int rxrpc_wait_for_tx_window_waitall(struct rxrpc_sock *rx,
rtt = READ_ONCE(call->peer->rtt); rtt2 = nsecs_to_jiffies64(rtt) * 2; - if (rtt2 < 1) - rtt2 = 1; + if (rtt2 < 2) + rtt2 = 2;
timeout = rtt2; tx_start = READ_ONCE(call->tx_hard_ack);
From: Markus Fuchs mklntf@gmail.com
[ Upstream commit fc191af1bb0d069dc7e981076e8b80af21f1e61d ]
Not every stmmac-based platform makes use of the eth_wake_irq or eth_lpi interrupts. Use the platform_get_irq_byname_optional variant for these interrupts, so no error message is displayed if they can't be found. Instead, print an informational message to hint that something might be wrong, to assist debugging on platforms which do use these interrupts.
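In readable form, the lookup pattern for the wake-up interrupt (from the hunk below; the fallback to the main MAC irq applies to eth_wake_irq only):

  stmmac_res->wol_irq =
          platform_get_irq_byname_optional(pdev, "eth_wake_irq");
  if (stmmac_res->wol_irq < 0) {
          if (stmmac_res->wol_irq == -EPROBE_DEFER)
                  return -EPROBE_DEFER;
          dev_info(&pdev->dev, "IRQ eth_wake_irq not found\n");
          stmmac_res->wol_irq = stmmac_res->irq;   /* fall back to the MAC irq */
  }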
Signed-off-by: Markus Fuchs mklntf@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/stmicro/stmmac/stmmac_platform.c | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c index 5150551c28be3..508325cc105d5 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c @@ -663,16 +663,22 @@ int stmmac_get_platform_resources(struct platform_device *pdev, * In case the wake up interrupt is not passed from the platform * so the driver will continue to use the mac irq (ndev->irq) */ - stmmac_res->wol_irq = platform_get_irq_byname(pdev, "eth_wake_irq"); + stmmac_res->wol_irq = + platform_get_irq_byname_optional(pdev, "eth_wake_irq"); if (stmmac_res->wol_irq < 0) { if (stmmac_res->wol_irq == -EPROBE_DEFER) return -EPROBE_DEFER; + dev_info(&pdev->dev, "IRQ eth_wake_irq not found\n"); stmmac_res->wol_irq = stmmac_res->irq; }
- stmmac_res->lpi_irq = platform_get_irq_byname(pdev, "eth_lpi"); - if (stmmac_res->lpi_irq == -EPROBE_DEFER) - return -EPROBE_DEFER; + stmmac_res->lpi_irq = + platform_get_irq_byname_optional(pdev, "eth_lpi"); + if (stmmac_res->lpi_irq < 0) { + if (stmmac_res->lpi_irq == -EPROBE_DEFER) + return -EPROBE_DEFER; + dev_info(&pdev->dev, "IRQ eth_lpi not found\n"); + }
res = platform_get_resource(pdev, IORESOURCE_MEM, 0); stmmac_res->addr = devm_ioremap_resource(&pdev->dev, res);
From: Zheng Wei wei.zheng@vivo.com
[ Upstream commit b317538c47943f9903860d83cc0060409e12d2ff ]
The printk in the vxge_debug_ll macro uses __VA_ARGS__ without the "##" prefix, which causes a build error when there are no variable arguments (e.g. when only fmt is specified).
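A minimal userspace illustration (not the vxge code) of why the GNU "##" extension, which the kernel relies on, is needed: with an empty __VA_ARGS__ a plain expansion leaves a trailing comma behind fmt and does not compile, whereas "##" swallows that comma.

  #include <stdio.h>

  #define log_plain(fmt, ...)  printf(fmt "\n", __VA_ARGS__)    /* log_plain("hi") fails to build */
  #define log_fixed(fmt, ...)  printf(fmt "\n", ##__VA_ARGS__)  /* ## removes the stray comma     */

  int main(void)
  {
          log_fixed("no arguments here");   /* compiles */
          log_fixed("value = %d", 42);      /* still works with arguments */
          return 0;
  }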
Signed-off-by: Zheng Wei wei.zheng@vivo.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/neterion/vxge/vxge-config.h | 2 +- drivers/net/ethernet/neterion/vxge/vxge-main.h | 14 +++++++------- 2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/neterion/vxge/vxge-config.h b/drivers/net/ethernet/neterion/vxge/vxge-config.h index e678ba379598e..628fa9b2f7416 100644 --- a/drivers/net/ethernet/neterion/vxge/vxge-config.h +++ b/drivers/net/ethernet/neterion/vxge/vxge-config.h @@ -2045,7 +2045,7 @@ vxge_hw_vpath_strip_fcs_check(struct __vxge_hw_device *hldev, u64 vpath_mask); if ((level >= VXGE_ERR && VXGE_COMPONENT_LL & VXGE_DEBUG_ERR_MASK) || \ (level >= VXGE_TRACE && VXGE_COMPONENT_LL & VXGE_DEBUG_TRACE_MASK))\ if ((mask & VXGE_DEBUG_MASK) == mask) \ - printk(fmt "\n", __VA_ARGS__); \ + printk(fmt "\n", ##__VA_ARGS__); \ } while (0) #else #define vxge_debug_ll(level, mask, fmt, ...) diff --git a/drivers/net/ethernet/neterion/vxge/vxge-main.h b/drivers/net/ethernet/neterion/vxge/vxge-main.h index 59a57ff5e96af..9c86f4f9cd424 100644 --- a/drivers/net/ethernet/neterion/vxge/vxge-main.h +++ b/drivers/net/ethernet/neterion/vxge/vxge-main.h @@ -452,49 +452,49 @@ int vxge_fw_upgrade(struct vxgedev *vdev, char *fw_name, int override);
#if (VXGE_DEBUG_LL_CONFIG & VXGE_DEBUG_MASK) #define vxge_debug_ll_config(level, fmt, ...) \ - vxge_debug_ll(level, VXGE_DEBUG_LL_CONFIG, fmt, __VA_ARGS__) + vxge_debug_ll(level, VXGE_DEBUG_LL_CONFIG, fmt, ##__VA_ARGS__) #else #define vxge_debug_ll_config(level, fmt, ...) #endif
#if (VXGE_DEBUG_INIT & VXGE_DEBUG_MASK) #define vxge_debug_init(level, fmt, ...) \ - vxge_debug_ll(level, VXGE_DEBUG_INIT, fmt, __VA_ARGS__) + vxge_debug_ll(level, VXGE_DEBUG_INIT, fmt, ##__VA_ARGS__) #else #define vxge_debug_init(level, fmt, ...) #endif
#if (VXGE_DEBUG_TX & VXGE_DEBUG_MASK) #define vxge_debug_tx(level, fmt, ...) \ - vxge_debug_ll(level, VXGE_DEBUG_TX, fmt, __VA_ARGS__) + vxge_debug_ll(level, VXGE_DEBUG_TX, fmt, ##__VA_ARGS__) #else #define vxge_debug_tx(level, fmt, ...) #endif
#if (VXGE_DEBUG_RX & VXGE_DEBUG_MASK) #define vxge_debug_rx(level, fmt, ...) \ - vxge_debug_ll(level, VXGE_DEBUG_RX, fmt, __VA_ARGS__) + vxge_debug_ll(level, VXGE_DEBUG_RX, fmt, ##__VA_ARGS__) #else #define vxge_debug_rx(level, fmt, ...) #endif
#if (VXGE_DEBUG_MEM & VXGE_DEBUG_MASK) #define vxge_debug_mem(level, fmt, ...) \ - vxge_debug_ll(level, VXGE_DEBUG_MEM, fmt, __VA_ARGS__) + vxge_debug_ll(level, VXGE_DEBUG_MEM, fmt, ##__VA_ARGS__) #else #define vxge_debug_mem(level, fmt, ...) #endif
#if (VXGE_DEBUG_ENTRYEXIT & VXGE_DEBUG_MASK) #define vxge_debug_entryexit(level, fmt, ...) \ - vxge_debug_ll(level, VXGE_DEBUG_ENTRYEXIT, fmt, __VA_ARGS__) + vxge_debug_ll(level, VXGE_DEBUG_ENTRYEXIT, fmt, ##__VA_ARGS__) #else #define vxge_debug_entryexit(level, fmt, ...) #endif
#if (VXGE_DEBUG_INTR & VXGE_DEBUG_MASK) #define vxge_debug_intr(level, fmt, ...) \ - vxge_debug_ll(level, VXGE_DEBUG_INTR, fmt, __VA_ARGS__) + vxge_debug_ll(level, VXGE_DEBUG_INTR, fmt, ##__VA_ARGS__) #else #define vxge_debug_intr(level, fmt, ...) #endif
From: Luo bin luobin9@huawei.com
[ Upstream commit 96758117dc528e6d84bd23d205e8cf7f31eda029 ]
It's unreliable for the firmware to check whether IO has stopped, so the driver instead waits long enough to ensure that IO processing is done in hardware before freeing resources.
Signed-off-by: Luo bin luobin9@huawei.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/huawei/hinic/hinic_hw_dev.c | 51 +------------------ 1 file changed, 2 insertions(+), 49 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c index 79b3d53f2fbfa..c7c75b772a866 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c @@ -360,50 +360,6 @@ static int wait_for_db_state(struct hinic_hwdev *hwdev) return -EFAULT; }
-static int wait_for_io_stopped(struct hinic_hwdev *hwdev) -{ - struct hinic_cmd_io_status cmd_io_status; - struct hinic_hwif *hwif = hwdev->hwif; - struct pci_dev *pdev = hwif->pdev; - struct hinic_pfhwdev *pfhwdev; - unsigned long end; - u16 out_size; - int err; - - if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) { - dev_err(&pdev->dev, "Unsupported PCI Function type\n"); - return -EINVAL; - } - - pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev); - - cmd_io_status.func_idx = HINIC_HWIF_FUNC_IDX(hwif); - - end = jiffies + msecs_to_jiffies(IO_STATUS_TIMEOUT); - do { - err = hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_COMM, - HINIC_COMM_CMD_IO_STATUS_GET, - &cmd_io_status, sizeof(cmd_io_status), - &cmd_io_status, &out_size, - HINIC_MGMT_MSG_SYNC); - if ((err) || (out_size != sizeof(cmd_io_status))) { - dev_err(&pdev->dev, "Failed to get IO status, ret = %d\n", - err); - return err; - } - - if (cmd_io_status.status == IO_STOPPED) { - dev_info(&pdev->dev, "IO stopped\n"); - return 0; - } - - msleep(20); - } while (time_before(jiffies, end)); - - dev_err(&pdev->dev, "Wait for IO stopped - Timeout\n"); - return -ETIMEDOUT; -} - /** * clear_io_resource - set the IO resources as not active in the NIC * @hwdev: the NIC HW device @@ -423,11 +379,8 @@ static int clear_io_resources(struct hinic_hwdev *hwdev) return -EINVAL; }
- err = wait_for_io_stopped(hwdev); - if (err) { - dev_err(&pdev->dev, "IO has not stopped yet\n"); - return err; - } + /* sleep 100ms to wait for firmware stopping I/O */ + msleep(100);
cmd_clear_io_res.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
From: Luo bin luobin9@huawei.com
[ Upstream commit 614eaa943e9fc3fcdbd4aa0692ae84973d363333 ]
The eq irq should be disabled before freeing it; the event queue depth must be cleared in hw before freeing the relevant memory, to avoid illegal memory accesses; and the consumer idx must be updated to avoid an invalid interrupt.
Signed-off-by: Luo bin luobin9@huawei.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/huawei/hinic/hinic_hw_eqs.c | 24 +++++++++++++------ 1 file changed, 17 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c index 79243b626ddbe..6a723c4757bce 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c @@ -188,7 +188,7 @@ static u8 eq_cons_idx_checksum_set(u32 val) * eq_update_ci - update the HW cons idx of event queue * @eq: the event queue to update the cons idx for **/ -static void eq_update_ci(struct hinic_eq *eq) +static void eq_update_ci(struct hinic_eq *eq, u32 arm_state) { u32 val, addr = EQ_CONS_IDX_REG_ADDR(eq);
@@ -202,7 +202,7 @@ static void eq_update_ci(struct hinic_eq *eq)
val |= HINIC_EQ_CI_SET(eq->cons_idx, IDX) | HINIC_EQ_CI_SET(eq->wrapped, WRAPPED) | - HINIC_EQ_CI_SET(EQ_ARMED, INT_ARMED); + HINIC_EQ_CI_SET(arm_state, INT_ARMED);
val |= HINIC_EQ_CI_SET(eq_cons_idx_checksum_set(val), XOR_CHKSUM);
@@ -347,7 +347,7 @@ static void eq_irq_handler(void *data) else if (eq->type == HINIC_CEQ) ceq_irq_handler(eq);
- eq_update_ci(eq); + eq_update_ci(eq, EQ_ARMED); }
/** @@ -702,7 +702,7 @@ static int init_eq(struct hinic_eq *eq, struct hinic_hwif *hwif, }
set_eq_ctrls(eq); - eq_update_ci(eq); + eq_update_ci(eq, EQ_ARMED);
err = alloc_eq_pages(eq); if (err) { @@ -752,18 +752,28 @@ static int init_eq(struct hinic_eq *eq, struct hinic_hwif *hwif, **/ static void remove_eq(struct hinic_eq *eq) { - struct msix_entry *entry = &eq->msix_entry; - - free_irq(entry->vector, eq); + hinic_set_msix_state(eq->hwif, eq->msix_entry.entry, + HINIC_MSIX_DISABLE); + free_irq(eq->msix_entry.vector, eq);
if (eq->type == HINIC_AEQ) { struct hinic_eq_work *aeq_work = &eq->aeq_work;
cancel_work_sync(&aeq_work->work); + /* clear aeq_len to avoid hw access host memory */ + hinic_hwif_write_reg(eq->hwif, + HINIC_CSR_AEQ_CTRL_1_ADDR(eq->q_id), 0); } else if (eq->type == HINIC_CEQ) { tasklet_kill(&eq->ceq_tasklet); + /* clear ceq_len to avoid hw access host memory */ + hinic_hwif_write_reg(eq->hwif, + HINIC_CSR_CEQ_CTRL_1_ADDR(eq->q_id), 0); }
+ /* update cons_idx to avoid invalid interrupt */ + eq->cons_idx = hinic_hwif_read_reg(eq->hwif, EQ_PROD_IDX_REG_ADDR(eq)); + eq_update_ci(eq, EQ_NOT_ARMED); + free_eq_pages(eq); }
From: Luo bin luobin9@huawei.com
[ Upstream commit 33f15da216a1f4566b4ec880942556ace30615df ]
Add a read barrier in the driver code to avoid reading other fields in DMA memory, which is writable by the hw, until we have verified that the memory is valid for the driver.
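A hedged, kernel-context sketch of the pattern (the descriptor layout and DESC_DONE flag are hypothetical, not the hinic structures): the hardware writes the payload fields first and the done/ownership flag last, so once the driver observes the flag it must order the remaining reads with dma_rmb() before trusting them.

  struct rx_desc {
          __le32 status;                  /* DESC_DONE written last by hw */
          __le16 len;
  };

  #define DESC_DONE       BIT(31)

  static int poll_one(struct rx_desc *desc, unsigned int *len)
  {
          if (!(le32_to_cpu(READ_ONCE(desc->status)) & DESC_DONE))
                  return -EAGAIN;         /* hw has not finished this entry */

          dma_rmb();                      /* order flag read before payload reads */

          *len = le16_to_cpu(desc->len);  /* now safe to consume */
          return 0;
  }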
Signed-off-by: Luo bin luobin9@huawei.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c | 2 ++ drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c | 2 ++ drivers/net/ethernet/huawei/hinic/hinic_rx.c | 3 +++ drivers/net/ethernet/huawei/hinic/hinic_tx.c | 2 ++ 4 files changed, 9 insertions(+)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c index eb53c15b13f33..33f93cc25193a 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c @@ -623,6 +623,8 @@ static int cmdq_cmd_ceq_handler(struct hinic_cmdq *cmdq, u16 ci, if (!CMDQ_WQE_COMPLETED(be32_to_cpu(ctrl->ctrl_info))) return -EBUSY;
+ dma_rmb(); + errcode = CMDQ_WQE_ERRCODE_GET(be32_to_cpu(status->status_info), VAL);
cmdq_sync_cmd_handler(cmdq, ci, errcode); diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c index 6a723c4757bce..c0b6bcb067cd4 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c @@ -235,6 +235,8 @@ static void aeq_irq_handler(struct hinic_eq *eq) if (HINIC_EQ_ELEM_DESC_GET(aeqe_desc, WRAPPED) == eq->wrapped) break;
+ dma_rmb(); + event = HINIC_EQ_ELEM_DESC_GET(aeqe_desc, TYPE); if (event >= HINIC_MAX_AEQ_EVENTS) { dev_err(&pdev->dev, "Unknown AEQ Event %d\n", event); diff --git a/drivers/net/ethernet/huawei/hinic/hinic_rx.c b/drivers/net/ethernet/huawei/hinic/hinic_rx.c index 2695ad69fca60..815649e37cb15 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_rx.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_rx.c @@ -350,6 +350,9 @@ static int rxq_recv(struct hinic_rxq *rxq, int budget) if (!rq_wqe) break;
+ /* make sure we read rx_done before packet length */ + dma_rmb(); + cqe = rq->cqe[ci]; status = be32_to_cpu(cqe->status); hinic_rq_get_sge(rxq->rq, rq_wqe, ci, &sge); diff --git a/drivers/net/ethernet/huawei/hinic/hinic_tx.c b/drivers/net/ethernet/huawei/hinic/hinic_tx.c index 0e13d1c7e4746..375d81d03e866 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_tx.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_tx.c @@ -622,6 +622,8 @@ static int free_tx_poll(struct napi_struct *napi, int budget) do { hw_ci = HW_CONS_IDX(sq) & wq->mask;
+ dma_rmb(); + /* Reading a WQEBB to get real WQE size and consumer index. */ sq_wqe = hinic_sq_read_wqebb(sq, &skb, &wqe_size, &sw_ci); if ((!sq_wqe) ||
From: Luo bin luobin9@huawei.com
[ Upstream commit 0da7c322f116210ebfdda59c7da663a6fc5e9cc8 ]
The second input parameter of wait_for_completion_timeout should be in jiffies instead of milliseconds.
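wait_for_completion_timeout() takes its timeout in jiffies and returns 0 when that timeout elapses, so a millisecond constant has to be converted; condensed from the cmdq hunk below (error handling elided):

  if (!wait_for_completion_timeout(&done,
                                   msecs_to_jiffies(CMDQ_TIMEOUT))) {
          /* timed out: CMDQ_TIMEOUT milliseconds, now correctly converted
           * to jiffies, passed without a response; clean up as in the hunk */
  }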
Signed-off-by: Luo bin luobin9@huawei.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c | 3 ++- drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c | 5 +++-- 2 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c index 33f93cc25193a..5f2d57d1b2d37 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c @@ -389,7 +389,8 @@ static int cmdq_sync_cmd_direct_resp(struct hinic_cmdq *cmdq,
spin_unlock_bh(&cmdq->cmdq_lock);
- if (!wait_for_completion_timeout(&done, CMDQ_TIMEOUT)) { + if (!wait_for_completion_timeout(&done, + msecs_to_jiffies(CMDQ_TIMEOUT))) { spin_lock_bh(&cmdq->cmdq_lock);
if (cmdq->errcode[curr_prod_idx] == &errcode) diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c index c1a6be6bf6a8c..8995e32dd1c00 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c @@ -43,7 +43,7 @@
#define MSG_NOT_RESP 0xFFFF
-#define MGMT_MSG_TIMEOUT 1000 +#define MGMT_MSG_TIMEOUT 5000
#define mgmt_to_pfhwdev(pf_mgmt) \ container_of(pf_mgmt, struct hinic_pfhwdev, pf_to_mgmt) @@ -267,7 +267,8 @@ static int msg_to_mgmt_sync(struct hinic_pf_to_mgmt *pf_to_mgmt, goto unlock_sync_msg; }
- if (!wait_for_completion_timeout(recv_done, MGMT_MSG_TIMEOUT)) { + if (!wait_for_completion_timeout(recv_done, + msecs_to_jiffies(MGMT_MSG_TIMEOUT))) { dev_err(&pdev->dev, "MGMT timeout, MSG id = %d\n", msg_id); err = -ETIMEDOUT; goto unlock_sync_msg;
From: Luo bin luobin9@huawei.com
[ Upstream commit 7296695fc16dd1761dbba8b68a9181c71cef0633 ]
The minimum skb length that the hw supports is 32 rather than 17.
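A hedged, generic sketch of where such a minimum matters (not necessarily the exact hinic xmit path): frames shorter than the hardware minimum are padded up to MIN_SKB_LEN before being posted to the hardware.

  #define MIN_SKB_LEN     32

  if (skb->len < MIN_SKB_LEN) {
          /* skb_pad() zero-fills the tail and frees the skb on failure */
          if (skb_pad(skb, MIN_SKB_LEN - skb->len))
                  goto err_dropped;
          skb->len = MIN_SKB_LEN;
  }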
Signed-off-by: Luo bin luobin9@huawei.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/huawei/hinic/hinic_tx.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_tx.c b/drivers/net/ethernet/huawei/hinic/hinic_tx.c index 375d81d03e866..365016450bdbe 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_tx.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_tx.c @@ -45,7 +45,7 @@
#define HW_CONS_IDX(sq) be16_to_cpu(*(u16 *)((sq)->hw_ci_addr))
-#define MIN_SKB_LEN 17 +#define MIN_SKB_LEN 32
#define MAX_PAYLOAD_OFFSET 221 #define TRANSPORT_OFFSET(l4_hdr, skb) ((u32)((l4_hdr) - (skb)->data))
From: Alan Maguire alan.maguire@oracle.com
[ Upstream commit 83a9b6f639e9f6b632337f9776de17d51d969c77 ]
Many systems build/test up-to-date kernels with older libcs, and an older glibc (2.17) lacks the definition of SOL_DCCP in /usr/include/bits/socket.h (it was added in the 4.6 timeframe).
Adding the definition to the test program avoids a compilation failure that gets in the way of building tools/testing/selftests/net. The test itself will work once the definition is added, either skipping (when DCCP is not configured in the kernel under test) or passing, so beyond that missing definition there do not appear to be any other dependencies on a more up-to-date glibc.
Fixes: 11fb60d1089f ("selftests: net: reuseport_addr_any: add DCCP") Signed-off-by: Alan Maguire alan.maguire@oracle.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- tools/testing/selftests/net/reuseport_addr_any.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/tools/testing/selftests/net/reuseport_addr_any.c b/tools/testing/selftests/net/reuseport_addr_any.c index c6233935fed14..b8475cb29be7a 100644 --- a/tools/testing/selftests/net/reuseport_addr_any.c +++ b/tools/testing/selftests/net/reuseport_addr_any.c @@ -21,6 +21,10 @@ #include <sys/socket.h> #include <unistd.h>
+#ifndef SOL_DCCP +#define SOL_DCCP 269 +#endif + static const char *IP4_ADDR = "127.0.0.1"; static const char *IP6_ADDR = "::1"; static const char *IP4_MAPPED6 = "::ffff:127.0.0.1";
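For illustration, this is roughly how the constant ends up being used: SOL_DCCP is the socket level passed to setsockopt() for DCCP-specific options. The sketch below is a hypothetical standalone probe, not the selftest itself; DCCP_SOCKOPT_SERVICE comes from the kernel's linux/dccp.h UAPI header, and the service code value is arbitrary.

#include <stdio.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/dccp.h>

#ifndef SOL_DCCP
#define SOL_DCCP 269	/* missing from older glibc socket headers */
#endif

/* Open a DCCP socket and set an (arbitrary) service code; fails cleanly
 * when DCCP is not configured in the running kernel.
 */
static int open_dccp_socket(void)
{
	unsigned int svc = htonl(42);	/* arbitrary service code for the example */
	int fd;

	fd = socket(AF_INET, SOCK_DCCP, IPPROTO_DCCP);
	if (fd < 0) {
		perror("socket(SOCK_DCCP): DCCP may not be configured");
		return -1;
	}

	if (setsockopt(fd, SOL_DCCP, DCCP_SOCKOPT_SERVICE, &svc, sizeof(svc)) < 0)
		perror("setsockopt(SOL_DCCP, DCCP_SOCKOPT_SERVICE)");

	return fd;
}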
From: Mike Marciniszyn mike.marciniszyn@intel.com
[ Upstream commit 9a293d1e21a6461a11b4217b155bf445e57f4131 ]
The following warning can occur when a pq is left on the dmawait list and the pq is then freed:
WARNING: CPU: 47 PID: 3546 at lib/list_debug.c:29 __list_add+0x65/0xc0
list_add corruption. next->prev should be prev (ffff939228da1880), but was ffff939cabb52230. (next=ffff939cabb52230).
Modules linked in: mmfs26(OE) mmfslinux(OE) tracedev(OE) 8021q garp mrp ib_isert iscsi_target_mod target_core_mod crc_t10dif crct10dif_generic opa_vnic rpcrdma ib_iser libiscsi scsi_transport_iscsi ib_ipoib(OE) bridge stp llc iTCO_wdt iTCO_vendor_support intel_powerclamp coretemp intel_rapl iosf_mbi kvm_intel kvm irqbypass crct10dif_pclmul crct10dif_common crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd ast ttm drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops drm pcspkr joydev drm_panel_orientation_quirks i2c_i801 mei_me lpc_ich mei wmi ipmi_si ipmi_devintf ipmi_msghandler nfit libnvdimm acpi_power_meter acpi_pad hfi1(OE) rdmavt(OE) rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm ib_core binfmt_misc numatools(OE) xpmem(OE) ip_tables nfsv3 nfs_acl nfs lockd grace sunrpc fscache igb ahci libahci i2c_algo_bit dca libata ptp pps_core crc32c_intel [last unloaded: i2c_algo_bit]
CPU: 47 PID: 3546 Comm: wrf.exe Kdump: loaded Tainted: G W OE ------------ 3.10.0-957.41.1.el7.x86_64 #1
Hardware name: HPE.COM HPE SGI 8600-XA730i Gen10/X11DPT-SB-SG007, BIOS SBED1229 01/22/2019
Call Trace:
[<ffffffff91f65ac0>] dump_stack+0x19/0x1b
[<ffffffff91898b78>] __warn+0xd8/0x100
[<ffffffff91898bff>] warn_slowpath_fmt+0x5f/0x80
[<ffffffff91a1dabe>] ? ___slab_alloc+0x24e/0x4f0
[<ffffffff91b97025>] __list_add+0x65/0xc0
[<ffffffffc03926a5>] defer_packet_queue+0x145/0x1a0 [hfi1]
[<ffffffffc0372987>] sdma_check_progress+0x67/0xa0 [hfi1]
[<ffffffffc03779d2>] sdma_send_txlist+0x432/0x550 [hfi1]
[<ffffffff91a20009>] ? kmem_cache_alloc+0x179/0x1f0
[<ffffffffc0392973>] ? user_sdma_send_pkts+0xc3/0x1990 [hfi1]
[<ffffffffc0393e3a>] user_sdma_send_pkts+0x158a/0x1990 [hfi1]
[<ffffffff918ab65e>] ? try_to_del_timer_sync+0x5e/0x90
[<ffffffff91a3fe1a>] ? __check_object_size+0x1ca/0x250
[<ffffffffc0395546>] hfi1_user_sdma_process_request+0xd66/0x1280 [hfi1]
[<ffffffffc034e0da>] hfi1_aio_write+0xca/0x120 [hfi1]
[<ffffffff91a4245b>] do_sync_readv_writev+0x7b/0xd0
[<ffffffff91a4409e>] do_readv_writev+0xce/0x260
[<ffffffff918df69f>] ? pick_next_task_fair+0x5f/0x1b0
[<ffffffff918db535>] ? sched_clock_cpu+0x85/0xc0
[<ffffffff91f6b16a>] ? __schedule+0x13a/0x860
[<ffffffff91a442c5>] vfs_writev+0x35/0x60
[<ffffffff91a4447f>] SyS_writev+0x7f/0x110
[<ffffffff91f78ddb>] system_call_fastpath+0x22/0x27
The issue happens when wait_event_interruptible_timeout() returns a value <= 0.
In that case, the pq is left on the list. The code continues sending packets and can potentially complete the current request with the pq still on the dmawait list, provided no descriptor shortage is seen.

If the pq is torn down in that state, the sdma interrupt handler could find the now-freed pq on the list, resulting in list corruption or memory corruption.
Fix by adding a flush routine to ensure that the pq is never on a list after processing a request.
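For reference, the "<= 0" check added below follows the return-value convention of wait_event_interruptible_timeout(): 0 means the timeout expired with the condition still false, a negative value means a signal arrived, and a positive value means the condition became true. A minimal sketch with a hypothetical wait queue and state flag (not the hfi1 code):

#include <linux/jiffies.h>
#include <linux/wait.h>

/* Wait up to 100ms for *state to become non-zero; return true only if the
 * condition was actually met, so the caller knows when to clean up.
 */
static bool waited_until_active(wait_queue_head_t *wq, int *state)
{
	long ret;

	ret = wait_event_interruptible_timeout(*wq, *state != 0,
					       msecs_to_jiffies(100));

	/* ret <= 0: timed out (0) or interrupted by a signal (-ERESTARTSYS) */
	return ret > 0;
}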
A follow-up patch series will address issues with seqlock surfaced in: https://lore.kernel.org/r/20200320003129.GP20941@ziepe.ca
The seqlock use for sdma will then be converted to a spin lock since the list_empty() doesn't need the protection afforded by the sequence lock currently in use.
Fixes: a0d406934a46 ("staging/rdma/hfi1: Add page lock limit check for SDMA requests") Link: https://lore.kernel.org/r/20200320200200.23203.37777.stgit@awfm-01.aw.intel.... Reviewed-by: Kaike Wan kaike.wan@intel.com Signed-off-by: Mike Marciniszyn mike.marciniszyn@intel.com Signed-off-by: Dennis Dalessandro dennis.dalessandro@intel.com Signed-off-by: Jason Gunthorpe jgg@mellanox.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/hfi1/user_sdma.c | 25 ++++++++++++++++++++++--- 1 file changed, 22 insertions(+), 3 deletions(-)
diff --git a/drivers/infiniband/hw/hfi1/user_sdma.c b/drivers/infiniband/hw/hfi1/user_sdma.c index c2f0d9ba93de1..13e4203497b33 100644 --- a/drivers/infiniband/hw/hfi1/user_sdma.c +++ b/drivers/infiniband/hw/hfi1/user_sdma.c @@ -141,6 +141,7 @@ static int defer_packet_queue( */ xchg(&pq->state, SDMA_PKT_Q_DEFERRED); if (list_empty(&pq->busy.list)) { + pq->busy.lock = &sde->waitlock; iowait_get_priority(&pq->busy); iowait_queue(pkts_sent, &pq->busy, &sde->dmawait); } @@ -155,6 +156,7 @@ static void activate_packet_queue(struct iowait *wait, int reason) { struct hfi1_user_sdma_pkt_q *pq = container_of(wait, struct hfi1_user_sdma_pkt_q, busy); + pq->busy.lock = NULL; xchg(&pq->state, SDMA_PKT_Q_ACTIVE); wake_up(&wait->wait_dma); }; @@ -256,6 +258,21 @@ int hfi1_user_sdma_alloc_queues(struct hfi1_ctxtdata *uctxt, return ret; }
+static void flush_pq_iowait(struct hfi1_user_sdma_pkt_q *pq) +{ + unsigned long flags; + seqlock_t *lock = pq->busy.lock; + + if (!lock) + return; + write_seqlock_irqsave(lock, flags); + if (!list_empty(&pq->busy.list)) { + list_del_init(&pq->busy.list); + pq->busy.lock = NULL; + } + write_sequnlock_irqrestore(lock, flags); +} + int hfi1_user_sdma_free_queues(struct hfi1_filedata *fd, struct hfi1_ctxtdata *uctxt) { @@ -281,6 +298,7 @@ int hfi1_user_sdma_free_queues(struct hfi1_filedata *fd, kfree(pq->reqs); kfree(pq->req_in_use); kmem_cache_destroy(pq->txreq_cache); + flush_pq_iowait(pq); kfree(pq); } else { spin_unlock(&fd->pq_rcu_lock); @@ -587,11 +605,12 @@ int hfi1_user_sdma_process_request(struct hfi1_filedata *fd, if (ret < 0) { if (ret != -EBUSY) goto free_req; - wait_event_interruptible_timeout( + if (wait_event_interruptible_timeout( pq->busy.wait_dma, - (pq->state == SDMA_PKT_Q_ACTIVE), + pq->state == SDMA_PKT_Q_ACTIVE, msecs_to_jiffies( - SDMA_IOWAIT_TIMEOUT)); + SDMA_IOWAIT_TIMEOUT)) <= 0) + flush_pq_iowait(pq); } } *count += idx;
From: Pablo Neira Ayuso pablo@netfilter.org
[ Upstream commit 8c2d45b2b65ca1f215244be1c600236e83f9815f ]
Currently, the -EEXIST return code of ->insert() callbacks is ambiguous: it might indicate that a given element (including intervals) already exists as such, or that the new element would clash with existing ones.
If identical elements already exist, the front-end ignores this without returning an error, as long as NLM_F_EXCL is not set. However, if the new element can't be inserted due to an overlap, we should report this to the user.

To this end, allow set back-ends to return -ENOTEMPTY on collision with existing elements, translate that to -EEXIST, and return it to userspace, no matter if NLM_F_EXCL was set.
Reported-by: Phil Sutter phil@nwl.cc Signed-off-by: Stefano Brivio sbrivio@redhat.com Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/netfilter/nf_tables_api.c | 5 +++++ 1 file changed, 5 insertions(+)
diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index 068daff41f6e6..ddaaf5e535d58 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -4689,6 +4689,11 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set, err = -EBUSY; else if (!(nlmsg_flags & NLM_F_EXCL)) err = 0; + } else if (err == -ENOTEMPTY) { + /* ENOTEMPTY reports overlapping between this element + * and an existing one. + */ + err = -EEXIST; } goto err5; }
From: Stefano Brivio sbrivio@redhat.com
[ Upstream commit 6f7c9caf017be8ab0fe3b99509580d0793bf0833 ]
Replace negations of nft_rbtree_interval_end() with a new helper, nft_rbtree_interval_start(), wherever this helps to visualise the problem at hand, that is, for all the occurrences except for the comparison against given flags in __nft_rbtree_get().
This gets especially useful in the next patch.
Signed-off-by: Stefano Brivio sbrivio@redhat.com Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/netfilter/nft_set_rbtree.c | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c index a9f804f7a04ac..95fcba34bfd35 100644 --- a/net/netfilter/nft_set_rbtree.c +++ b/net/netfilter/nft_set_rbtree.c @@ -33,6 +33,11 @@ static bool nft_rbtree_interval_end(const struct nft_rbtree_elem *rbe) (*nft_set_ext_flags(&rbe->ext) & NFT_SET_ELEM_INTERVAL_END); }
+static bool nft_rbtree_interval_start(const struct nft_rbtree_elem *rbe) +{ + return !nft_rbtree_interval_end(rbe); +} + static bool nft_rbtree_equal(const struct nft_set *set, const void *this, const struct nft_rbtree_elem *interval) { @@ -64,7 +69,7 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set if (interval && nft_rbtree_equal(set, this, interval) && nft_rbtree_interval_end(rbe) && - !nft_rbtree_interval_end(interval)) + nft_rbtree_interval_start(interval)) continue; interval = rbe; } else if (d > 0) @@ -89,7 +94,7 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
if (set->flags & NFT_SET_INTERVAL && interval != NULL && nft_set_elem_active(&interval->ext, genmask) && - !nft_rbtree_interval_end(interval)) { + nft_rbtree_interval_start(interval)) { *ext = &interval->ext; return true; } @@ -224,9 +229,9 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set, p = &parent->rb_right; else { if (nft_rbtree_interval_end(rbe) && - !nft_rbtree_interval_end(new)) { + nft_rbtree_interval_start(new)) { p = &parent->rb_left; - } else if (!nft_rbtree_interval_end(rbe) && + } else if (nft_rbtree_interval_start(rbe) && nft_rbtree_interval_end(new)) { p = &parent->rb_right; } else if (nft_set_elem_active(&rbe->ext, genmask)) { @@ -317,10 +322,10 @@ static void *nft_rbtree_deactivate(const struct net *net, parent = parent->rb_right; else { if (nft_rbtree_interval_end(rbe) && - !nft_rbtree_interval_end(this)) { + nft_rbtree_interval_start(this)) { parent = parent->rb_left; continue; - } else if (!nft_rbtree_interval_end(rbe) && + } else if (nft_rbtree_interval_start(rbe) && nft_rbtree_interval_end(this)) { parent = parent->rb_right; continue;
From: Stefano Brivio sbrivio@redhat.com
[ Upstream commit 7c84d41416d836ef7e533bd4d64ccbdf40c5ac70 ]
...and return -ENOTEMPTY to the front-end in this case, instead of proceeding. Currently, nft takes care of checking for these cases and not sending them to the kernel, but if we drop the set_overlap() call in nft we can end up in situations like:
# nft add table t
# nft add set t s '{ type inet_service ; flags interval ; }'
# nft add element t s '{ 1 - 5 }'
# nft add element t s '{ 6 - 10 }'
# nft add element t s '{ 4 - 7 }'
# nft list set t s
table ip t {
	set s {
		type inet_service
		flags interval
		elements = { 1-3, 4-5, 6-7 }
	}
}
The primary purpose of this change is to make the behaviour consistent with nft_set_pipapo, but it also serves to avoid inconsistent behaviour if userspace sends overlapping elements for any reason.
v2: When we meet the same key data in the tree, as start element while inserting an end element, or as end element while inserting a start element, actually check that the existing element is active, before resetting the overlap flag (Pablo Neira Ayuso)
Signed-off-by: Stefano Brivio sbrivio@redhat.com Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/netfilter/nft_set_rbtree.c | 70 ++++++++++++++++++++++++++++++++-- 1 file changed, 67 insertions(+), 3 deletions(-)
diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c index 95fcba34bfd35..996fd9dc6160c 100644 --- a/net/netfilter/nft_set_rbtree.c +++ b/net/netfilter/nft_set_rbtree.c @@ -213,8 +213,43 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set, u8 genmask = nft_genmask_next(net); struct nft_rbtree_elem *rbe; struct rb_node *parent, **p; + bool overlap = false; int d;
+ /* Detect overlaps as we descend the tree. Set the flag in these cases: + * + * a1. |__ _ _? >|__ _ _ (insert start after existing start) + * a2. _ _ __>| ?_ _ __| (insert end before existing end) + * a3. _ _ ___| ?_ _ _>| (insert end after existing end) + * a4. >|__ _ _ _ _ __| (insert start before existing end) + * + * and clear it later on, as we eventually reach the points indicated by + * '?' above, in the cases described below. We'll always meet these + * later, locally, due to tree ordering, and overlaps for the intervals + * that are the closest together are always evaluated last. + * + * b1. |__ _ _! >|__ _ _ (insert start after existing end) + * b2. _ _ __>| !_ _ __| (insert end before existing start) + * b3. !_____>| (insert end after existing start) + * + * Case a4. resolves to b1.: + * - if the inserted start element is the leftmost, because the '0' + * element in the tree serves as end element + * - otherwise, if an existing end is found. Note that end elements are + * always inserted after corresponding start elements. + * + * For a new, rightmost pair of elements, we'll hit cases b1. and b3., + * in that order. + * + * The flag is also cleared in two special cases: + * + * b4. |__ _ _!|<_ _ _ (insert start right before existing end) + * b5. |__ _ >|!__ _ _ (insert end right after existing start) + * + * which always happen as last step and imply that no further + * overlapping is possible. + */ + parent = NULL; p = &priv->root.rb_node; while (*p != NULL) { @@ -223,17 +258,42 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set, d = memcmp(nft_set_ext_key(&rbe->ext), nft_set_ext_key(&new->ext), set->klen); - if (d < 0) + if (d < 0) { p = &parent->rb_left; - else if (d > 0) + + if (nft_rbtree_interval_start(new)) { + overlap = nft_rbtree_interval_start(rbe) && + nft_set_elem_active(&rbe->ext, + genmask); + } else { + overlap = nft_rbtree_interval_end(rbe) && + nft_set_elem_active(&rbe->ext, + genmask); + } + } else if (d > 0) { p = &parent->rb_right; - else { + + if (nft_rbtree_interval_end(new)) { + overlap = nft_rbtree_interval_end(rbe) && + nft_set_elem_active(&rbe->ext, + genmask); + } else if (nft_rbtree_interval_end(rbe) && + nft_set_elem_active(&rbe->ext, genmask)) { + overlap = true; + } + } else { if (nft_rbtree_interval_end(rbe) && nft_rbtree_interval_start(new)) { p = &parent->rb_left; + + if (nft_set_elem_active(&rbe->ext, genmask)) + overlap = false; } else if (nft_rbtree_interval_start(rbe) && nft_rbtree_interval_end(new)) { p = &parent->rb_right; + + if (nft_set_elem_active(&rbe->ext, genmask)) + overlap = false; } else if (nft_set_elem_active(&rbe->ext, genmask)) { *ext = &rbe->ext; return -EEXIST; @@ -242,6 +302,10 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set, } } } + + if (overlap) + return -ENOTEMPTY; + rb_link_node_rcu(&new->node, parent, p); rb_insert_color(&new->node, &priv->root); return 0;
From: Raju Rangoju rajur@chelsio.com
[ Upstream commit 50e0d28d3808146cc19b0d5564ef4ba9e5bf3846 ]
cxgb4_ptp_fineadjtime() doesn't pass the signedness of the offset delta in FW_PTP_CMD. Fix it by passing the correct sign.
Signed-off-by: Raju Rangoju rajur@chelsio.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c index 58a039c3224ad..af1f40cbccc88 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c +++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c @@ -246,6 +246,9 @@ static int cxgb4_ptp_fineadjtime(struct adapter *adapter, s64 delta) FW_PTP_CMD_PORTID_V(0)); c.retval_len16 = cpu_to_be32(FW_CMD_LEN16_V(sizeof(c) / 16)); c.u.ts.sc = FW_PTP_SC_ADJ_FTIME; + c.u.ts.sign = (delta < 0) ? 1 : 0; + if (delta < 0) + delta = -delta; c.u.ts.tm = cpu_to_be64(delta);
err = t4_wr_mbox(adapter, adapter->mbox, &c, sizeof(c), NULL);
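The change above boils down to splitting the signed delta into a sign flag and a magnitude before it is byte-swapped into the (unsigned) firmware field. A standalone restatement with hypothetical message and field names:

#include <linux/types.h>
#include <asm/byteorder.h>

/* Hypothetical firmware message: sign and magnitude travel separately. */
struct fw_adj_msg {
	u8 sign;	/* 1 = negative adjustment, 0 = positive */
	__be64 tm;	/* absolute value of the adjustment */
};

static void fill_fineadj(struct fw_adj_msg *msg, s64 delta)
{
	msg->sign = (delta < 0) ? 1 : 0;
	if (delta < 0)
		delta = -delta;	/* firmware expects the magnitude only */
	msg->tm = cpu_to_be64(delta);
}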
From: Yintian Tao yttao@amd.com
[ Upstream commit 77bb2f204f1f0a53a602a8fd15816d6826212077 ]
There is one corner case in dma_fence_signal_locked() that can raise the NULL pointer problem shown below.

->dma_fence_signal
  ->dma_fence_signal_locked
    ->test_and_set_bit

Here dma_fence_release() can be triggered because the fence refcount has dropped to zero:

->dma_fence_put
  ->dma_fence_release
    ->drm_sched_fence_release_scheduled
      ->call_rcu

This sets the union field "cb_list" of the finished fence to NULL, because struct rcu_head contains two pointers, the same layout as the struct list_head cb_list.
Therefore, hold a reference to the finished fence in drm_sched_process_job() to prevent the NULL pointer dereference while the finished fence is being signalled with dma_fence_signal().
[ 732.912867] BUG: kernel NULL pointer dereference, address: 0000000000000008
[ 732.914815] #PF: supervisor write access in kernel mode
[ 732.915731] #PF: error_code(0x0002) - not-present page
[ 732.916621] PGD 0 P4D 0
[ 732.917072] Oops: 0002 [#1] SMP PTI
[ 732.917682] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G OE 5.4.0-rc7 #1
[ 732.918980] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.2-0-g33fbe13 by qemu-project.org 04/01/2014
[ 732.920906] RIP: 0010:dma_fence_signal_locked+0x3e/0x100
[ 732.938569] Call Trace:
[ 732.939003] <IRQ>
[ 732.939364] dma_fence_signal+0x29/0x50
[ 732.940036] drm_sched_fence_finished+0x12/0x20 [gpu_sched]
[ 732.940996] drm_sched_process_job+0x34/0xa0 [gpu_sched]
[ 732.941910] dma_fence_signal_locked+0x85/0x100
[ 732.942692] dma_fence_signal+0x29/0x50
[ 732.943457] amdgpu_fence_process+0x99/0x120 [amdgpu]
[ 732.944393] sdma_v4_0_process_trap_irq+0x81/0xa0 [amdgpu]
v2: hold the finished fence at drm_sched_process_job instead of amdgpu_fence_process
v3: restore the blank line
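The fix below is an instance of a general pattern: take a temporary reference before a call that may indirectly drop the last reference, and release it afterwards. A minimal sketch with a hypothetical callback (dma_fence_get/dma_fence_signal/dma_fence_put are the real dma-fence API):

#include <linux/dma-fence.h>

static void my_job_done(struct dma_fence *fence)
{
	dma_fence_get(fence);		/* pin the fence for the duration */
	dma_fence_signal(fence);	/* callbacks here may drop other refs */
	dma_fence_put(fence);		/* release our temporary reference */
}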
Signed-off-by: Yintian Tao yttao@amd.com Reviewed-by: Christian König christian.koenig@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/scheduler/sched_main.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index 2af64459b3d77..dfb29e6eeff1e 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -627,7 +627,9 @@ static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
trace_drm_sched_process_job(s_fence);
+ dma_fence_get(&s_fence->finished); drm_sched_fence_finished(s_fence); + dma_fence_put(&s_fence->finished); wake_up_interruptible(&sched->wake_up_worker); }
From: Ilan Peer ilan.peer@intel.com
[ Upstream commit 05dcb8bb258575a8dd3499d0d78bd2db633c2b23 ]
When cfg80211_update_assoc_bss_entry() is called, there is a verification that the BSS channel actually changed. As some APs also use CSA for bandwidth changes, this would result in a kernel warning.
Fix this by removing the WARN_ON().
Signed-off-by: Ilan Peer ilan.peer@intel.com Signed-off-by: Luca Coelho luciano.coelho@intel.com Link: https://lore.kernel.org/r/iwlwifi.20200326150855.96316ada0e8d.I6710376b1b425... Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/wireless/scan.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/net/wireless/scan.c b/net/wireless/scan.c index aef240fdf8df6..328402ab64a3f 100644 --- a/net/wireless/scan.c +++ b/net/wireless/scan.c @@ -2022,7 +2022,11 @@ void cfg80211_update_assoc_bss_entry(struct wireless_dev *wdev,
spin_lock_bh(&rdev->bss_lock);
- if (WARN_ON(cbss->pub.channel == chan)) + /* + * Some APs use CSA also for bandwidth changes, i.e., without actually + * changing the control channel, so no need to update in such a case. + */ + if (cbss->pub.channel == chan) goto done;
/* use transmitting bss */
From: Xu Wang vulab@iscas.ac.cn
[ Upstream commit bcaeb886ade124331a6f3a5cef34a3f1484c0a03 ]
In qlcnic_83xx_get_reset_instruction_template(), the NULL check after the allocation tests the buffer through a different pointer expression than the one just assigned; check ahw->reset.buff directly.
Signed-off-by: Xu Wang vulab@iscas.ac.cn Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c index 07f9067affc65..cda5b0a9e9489 100644 --- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c +++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c @@ -1720,7 +1720,7 @@ static int qlcnic_83xx_get_reset_instruction_template(struct qlcnic_adapter *p_d
ahw->reset.seq_error = 0; ahw->reset.buff = kzalloc(QLC_83XX_RESTART_TEMPLATE_SIZE, GFP_KERNEL); - if (p_dev->ahw->reset.buff == NULL) + if (ahw->reset.buff == NULL) return -ENOMEM;
p_buff = p_dev->ahw->reset.buff;
From: Alain Volmat avolmat@me.com
[ Upstream commit f491c6687332920e296d0209e366fe2ca7eab1c6 ]
Fix a missing struct parameter description to allow warning free W=1 compilation.
Signed-off-by: Alain Volmat avolmat@me.com Reviewed-by: Patrice Chotard patrice.chotard@st.com Signed-off-by: Wolfram Sang wsa@the-dreams.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/i2c/busses/i2c-st.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/i2c/busses/i2c-st.c b/drivers/i2c/busses/i2c-st.c index 54e1fc8a495e7..f7f7b5b64720e 100644 --- a/drivers/i2c/busses/i2c-st.c +++ b/drivers/i2c/busses/i2c-st.c @@ -434,6 +434,7 @@ static void st_i2c_wr_fill_tx_fifo(struct st_i2c_dev *i2c_dev) /** * st_i2c_rd_fill_tx_fifo() - Fill the Tx FIFO in read mode * @i2c_dev: Controller's private data + * @max: Maximum amount of data to fill into the Tx FIFO * * This functions fills the Tx FIFO with fixed pattern when * in read mode to trigger clock.
From: Chris Packham chris.packham@alliedtelesis.co.nz
[ Upstream commit 14c1fe699cad9cb0acda4559c584f136d18fea50 ]
The interrupt is not required, so use platform_get_irq_optional() to avoid error messages like
i2c-pca-platform 22080000.i2c: IRQ index 0 not found
Signed-off-by: Chris Packham chris.packham@alliedtelesis.co.nz Signed-off-by: Wolfram Sang wsa@the-dreams.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/i2c/busses/i2c-pca-platform.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/i2c/busses/i2c-pca-platform.c b/drivers/i2c/busses/i2c-pca-platform.c index a7a81846d5b1d..635dd697ac0bb 100644 --- a/drivers/i2c/busses/i2c-pca-platform.c +++ b/drivers/i2c/busses/i2c-pca-platform.c @@ -140,7 +140,7 @@ static int i2c_pca_pf_probe(struct platform_device *pdev) int ret = 0; int irq;
- irq = platform_get_irq(pdev, 0); + irq = platform_get_irq_optional(pdev, 0); /* If irq is 0, we do polling. */ if (irq < 0) irq = 0;
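For reference, a minimal sketch of the optional-IRQ probe pattern used above, with a hypothetical driver (not i2c-pca-platform itself): platform_get_irq_optional() returns a negative error without logging when no interrupt is described, so the driver can silently fall back to polling.

#include <linux/platform_device.h>

static int my_probe(struct platform_device *pdev)
{
	int irq;

	irq = platform_get_irq_optional(pdev, 0);
	if (irq < 0)
		irq = 0;	/* this hypothetical driver treats 0 as "poll" */

	/* ... register the device, passing irq (or 0 for polled mode) ... */
	return 0;
}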
linux-stable-mirror@lists.linaro.org