From: Vineet Gupta vgupta@synopsys.com
[ Upstream commit fe81d927b78c4f0557836661d32e41ebc957b024 ]
Newer versions of the HSDK board, aka HSDK-4xD (with a dual-issue HS48x4 CPU), wire up the perf interrupt, so enable it in DT. This is OK for the old HSDK, where this irq is simply ignored because the pct irq is not wired up in hardware.
Signed-off-by: Vineet Gupta vgupta@synopsys.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arc/boot/dts/hsdk.dts | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/arch/arc/boot/dts/hsdk.dts b/arch/arc/boot/dts/hsdk.dts index 9acbeba832c0b..5d64a5a940ee6 100644 --- a/arch/arc/boot/dts/hsdk.dts +++ b/arch/arc/boot/dts/hsdk.dts @@ -88,6 +88,8 @@ idu_intc: idu-interrupt-controller {
arcpct: pct { compatible = "snps,archs-pct"; + interrupt-parent = <&cpu_intc>; + interrupts = <20>; };
/* TIMER0 with interrupt for clockevent */
From: Hanjun Guo guohanjun@huawei.com
[ Upstream commit 7eb48dd094de5fe0e216b550e73aa85257903973 ]
acpi_get_table() should be coupled with acpi_put_table() to release the table mapping when the mapped table is not used at runtime, so put the CSRT table buffer after using it.
Signed-off-by: Hanjun Guo guohanjun@huawei.com Link: https://lore.kernel.org/r/1595411661-15936-1-git-send-email-guohanjun@huawei... Signed-off-by: Vinod Koul vkoul@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/dma/acpi-dma.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/dma/acpi-dma.c b/drivers/dma/acpi-dma.c index 8a05db3343d39..dcbcb712de6e8 100644 --- a/drivers/dma/acpi-dma.c +++ b/drivers/dma/acpi-dma.c @@ -135,11 +135,13 @@ static void acpi_dma_parse_csrt(struct acpi_device *adev, struct acpi_dma *adma) if (ret < 0) { dev_warn(&adev->dev, "error in parsing resource group\n"); - return; + break; }
grp = (struct acpi_csrt_group *)((void *)grp + grp->length); } + + acpi_put_table((struct acpi_table_header *)csrt); }
/**
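For reference, the acquire/release pairing this enforces looks roughly like the following kernel-style sketch (illustrative only, not the actual acpi-dma.c code; error handling trimmed):

#include <linux/acpi.h>

static void example_parse_csrt(void)
{
	struct acpi_table_csrt *csrt;
	acpi_status status;

	/* acpi_get_table() maps the table and takes a reference on it */
	status = acpi_get_table(ACPI_SIG_CSRT, 0,
				(struct acpi_table_header **)&csrt);
	if (ACPI_FAILURE(status))
		return;

	/* ... walk the resource groups inside csrt ... */

	/* drop the mapping once the table is no longer needed at runtime */
	acpi_put_table((struct acpi_table_header *)csrt);
}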
From: Jiaxun Yang jiaxun.yang@flygoat.com
[ Upstream commit 433c1ca0d441ee0b88fdd83c84ee6d6d43080dcd ]
Do not override the ejtag feature to 0, as Loongson 3A1000+ does have ejtag. As for watch: a KVM-emulated CPU doesn't have the watch feature, so we should not enable it unconditionally.
Signed-off-by: Jiaxun Yang jiaxun.yang@flygoat.com Reviewed-by: Huacai Chen chenhc@lemote.com Signed-off-by: Thomas Bogendoerfer tsbogend@alpha.franken.de Signed-off-by: Sasha Levin sashal@kernel.org --- arch/mips/include/asm/mach-loongson64/cpu-feature-overrides.h | 2 -- 1 file changed, 2 deletions(-)
diff --git a/arch/mips/include/asm/mach-loongson64/cpu-feature-overrides.h b/arch/mips/include/asm/mach-loongson64/cpu-feature-overrides.h index b6e9c99b85a52..eb181224eb4c4 100644 --- a/arch/mips/include/asm/mach-loongson64/cpu-feature-overrides.h +++ b/arch/mips/include/asm/mach-loongson64/cpu-feature-overrides.h @@ -26,7 +26,6 @@ #define cpu_has_counter 1 #define cpu_has_dc_aliases (PAGE_SIZE < 0x4000) #define cpu_has_divec 0 -#define cpu_has_ejtag 0 #define cpu_has_inclusive_pcaches 1 #define cpu_has_llsc 1 #define cpu_has_mcheck 0 @@ -42,7 +41,6 @@ #define cpu_has_veic 0 #define cpu_has_vint 0 #define cpu_has_vtag_icache 0 -#define cpu_has_watch 1 #define cpu_has_wsbh 1 #define cpu_has_ic_fills_f_dc 1 #define cpu_hwrena_impl_bits 0xc0000000
From: Florian Westphal fw@strlen.de
[ Upstream commit cc5453a5b7e90c39f713091a7ebc53c1f87d1700 ]
If an sctp connection gets re-used, heartbeats are flagged as invalid because their vtag doesn't match.
Handle this in a similar way as TCP conntrack when it suspects that the endpoints and conntrack are out-of-sync.
When a HEARTBEAT request fails its vtag validation, flag this in the conntrack state and accept the packet.
When a HEARTBEAT_ACK is received with an invalid vtag in the reverse direction after we allowed such a HEARTBEAT through, assume we are out-of-sync and re-set the vtag info.
v2: remove a left-over snippet from an older incarnation that moved the new_state/old_state assignments; that's not needed, so keep them as-is.
Signed-off-by: Florian Westphal fw@strlen.de Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/netfilter/nf_conntrack_sctp.h | 2 ++ net/netfilter/nf_conntrack_proto_sctp.c | 39 ++++++++++++++++++--- 2 files changed, 37 insertions(+), 4 deletions(-)
diff --git a/include/linux/netfilter/nf_conntrack_sctp.h b/include/linux/netfilter/nf_conntrack_sctp.h index 9a33f171aa822..625f491b95de8 100644 --- a/include/linux/netfilter/nf_conntrack_sctp.h +++ b/include/linux/netfilter/nf_conntrack_sctp.h @@ -9,6 +9,8 @@ struct ip_ct_sctp { enum sctp_conntrack state;
__be32 vtag[IP_CT_DIR_MAX]; + u8 last_dir; + u8 flags; };
#endif /* _NF_CONNTRACK_SCTP_H */ diff --git a/net/netfilter/nf_conntrack_proto_sctp.c b/net/netfilter/nf_conntrack_proto_sctp.c index 4f897b14b6069..810cca24b3990 100644 --- a/net/netfilter/nf_conntrack_proto_sctp.c +++ b/net/netfilter/nf_conntrack_proto_sctp.c @@ -62,6 +62,8 @@ static const unsigned int sctp_timeouts[SCTP_CONNTRACK_MAX] = { [SCTP_CONNTRACK_HEARTBEAT_ACKED] = 210 SECS, };
+#define SCTP_FLAG_HEARTBEAT_VTAG_FAILED 1 + #define sNO SCTP_CONNTRACK_NONE #define sCL SCTP_CONNTRACK_CLOSED #define sCW SCTP_CONNTRACK_COOKIE_WAIT @@ -369,6 +371,7 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct, u_int32_t offset, count; unsigned int *timeouts; unsigned long map[256 / sizeof(unsigned long)] = { 0 }; + bool ignore = false;
if (sctp_error(skb, dataoff, state)) return -NF_ACCEPT; @@ -427,15 +430,39 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct, /* Sec 8.5.1 (D) */ if (sh->vtag != ct->proto.sctp.vtag[dir]) goto out_unlock; - } else if (sch->type == SCTP_CID_HEARTBEAT || - sch->type == SCTP_CID_HEARTBEAT_ACK) { + } else if (sch->type == SCTP_CID_HEARTBEAT) { + if (ct->proto.sctp.vtag[dir] == 0) { + pr_debug("Setting %d vtag %x for dir %d\n", sch->type, sh->vtag, dir); + ct->proto.sctp.vtag[dir] = sh->vtag; + } else if (sh->vtag != ct->proto.sctp.vtag[dir]) { + if (test_bit(SCTP_CID_DATA, map) || ignore) + goto out_unlock; + + ct->proto.sctp.flags |= SCTP_FLAG_HEARTBEAT_VTAG_FAILED; + ct->proto.sctp.last_dir = dir; + ignore = true; + continue; + } else if (ct->proto.sctp.flags & SCTP_FLAG_HEARTBEAT_VTAG_FAILED) { + ct->proto.sctp.flags &= ~SCTP_FLAG_HEARTBEAT_VTAG_FAILED; + } + } else if (sch->type == SCTP_CID_HEARTBEAT_ACK) { if (ct->proto.sctp.vtag[dir] == 0) { pr_debug("Setting vtag %x for dir %d\n", sh->vtag, dir); ct->proto.sctp.vtag[dir] = sh->vtag; } else if (sh->vtag != ct->proto.sctp.vtag[dir]) { - pr_debug("Verification tag check failed\n"); - goto out_unlock; + if (test_bit(SCTP_CID_DATA, map) || ignore) + goto out_unlock; + + if ((ct->proto.sctp.flags & SCTP_FLAG_HEARTBEAT_VTAG_FAILED) == 0 || + ct->proto.sctp.last_dir == dir) + goto out_unlock; + + ct->proto.sctp.flags &= ~SCTP_FLAG_HEARTBEAT_VTAG_FAILED; + ct->proto.sctp.vtag[dir] = sh->vtag; + ct->proto.sctp.vtag[!dir] = 0; + } else if (ct->proto.sctp.flags & SCTP_FLAG_HEARTBEAT_VTAG_FAILED) { + ct->proto.sctp.flags &= ~SCTP_FLAG_HEARTBEAT_VTAG_FAILED; } }
@@ -470,6 +497,10 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct, } spin_unlock_bh(&ct->lock);
+ /* allow but do not refresh timeout */ + if (ignore) + return NF_ACCEPT; + timeouts = nf_ct_timeout_lookup(ct); if (!timeouts) timeouts = nf_sctp_pernet(nf_ct_net(ct))->timeouts;
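To make the new resync rule easier to follow, here is a stripped-down userspace model of it (the two-direction vtag array, the flag and last_dir mirror the conntrack fields above; packet parsing, locking and the DATA-chunk check are left out):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define DIR_ORIG  0
#define DIR_REPLY 1
#define FLAG_HB_VTAG_FAILED 1

struct ct_sctp {
	uint32_t vtag[2];
	uint8_t  last_dir;
	uint8_t  flags;
};

/* HEARTBEAT with a wrong vtag: flag it, remember the direction, let it pass */
static bool heartbeat(struct ct_sctp *ct, int dir, uint32_t vtag)
{
	if (ct->vtag[dir] == 0)
		ct->vtag[dir] = vtag;
	else if (vtag != ct->vtag[dir]) {
		ct->flags |= FLAG_HB_VTAG_FAILED;
		ct->last_dir = dir;
		return true;		/* accepted, but timeout not refreshed */
	} else
		ct->flags &= ~FLAG_HB_VTAG_FAILED;
	return true;
}

/* HEARTBEAT_ACK with a wrong vtag in the other direction, after a flagged
 * HEARTBEAT: assume the connection was re-used and adopt the new vtag */
static bool heartbeat_ack(struct ct_sctp *ct, int dir, uint32_t vtag)
{
	if (ct->vtag[dir] == 0)
		ct->vtag[dir] = vtag;
	else if (vtag != ct->vtag[dir]) {
		if (!(ct->flags & FLAG_HB_VTAG_FAILED) || ct->last_dir == dir)
			return false;	/* drop: no prior failed HEARTBEAT */
		ct->flags &= ~FLAG_HB_VTAG_FAILED;
		ct->vtag[dir] = vtag;
		ct->vtag[!dir] = 0;	/* re-learn the other direction */
	} else
		ct->flags &= ~FLAG_HB_VTAG_FAILED;
	return true;
}

int main(void)
{
	struct ct_sctp ct = { .vtag = { 0x1111, 0x2222 } };

	heartbeat(&ct, DIR_ORIG, 0xaaaa);       /* new vtag -> flagged, accepted */
	heartbeat_ack(&ct, DIR_REPLY, 0xbbbb);  /* reverse dir -> resync */
	printf("vtag orig=%x reply=%x\n",
	       (unsigned)ct.vtag[DIR_ORIG], (unsigned)ct.vtag[DIR_REPLY]);
	return 0;
}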
From: David Howells dhowells@redhat.com
[ Upstream commit 68528d937dcd675e79973061c1a314db598162d1 ]
Keep the ACK serial number in a variable in rxrpc_input_ack() as it's used frequently.
Signed-off-by: David Howells dhowells@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/rxrpc/input.c | 21 +++++++++++---------- 1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index 767579328a069..a7699e56eac88 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -843,7 +843,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) struct rxrpc_ackinfo info; u8 acks[RXRPC_MAXACKS]; } buf; - rxrpc_serial_t acked_serial; + rxrpc_serial_t ack_serial, acked_serial; rxrpc_seq_t first_soft_ack, hard_ack, prev_pkt; int nr_acks, offset, ioffset;
@@ -856,6 +856,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) } offset += sizeof(buf.ack);
+ ack_serial = sp->hdr.serial; acked_serial = ntohl(buf.ack.serial); first_soft_ack = ntohl(buf.ack.firstPacket); prev_pkt = ntohl(buf.ack.previousPacket); @@ -864,31 +865,31 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) summary.ack_reason = (buf.ack.reason < RXRPC_ACK__INVALID ? buf.ack.reason : RXRPC_ACK__INVALID);
- trace_rxrpc_rx_ack(call, sp->hdr.serial, acked_serial, + trace_rxrpc_rx_ack(call, ack_serial, acked_serial, first_soft_ack, prev_pkt, summary.ack_reason, nr_acks);
if (buf.ack.reason == RXRPC_ACK_PING_RESPONSE) rxrpc_input_ping_response(call, skb->tstamp, acked_serial, - sp->hdr.serial); + ack_serial); if (buf.ack.reason == RXRPC_ACK_REQUESTED) rxrpc_input_requested_ack(call, skb->tstamp, acked_serial, - sp->hdr.serial); + ack_serial);
if (buf.ack.reason == RXRPC_ACK_PING) { - _proto("Rx ACK %%%u PING Request", sp->hdr.serial); + _proto("Rx ACK %%%u PING Request", ack_serial); rxrpc_propose_ACK(call, RXRPC_ACK_PING_RESPONSE, - sp->hdr.serial, true, true, + ack_serial, true, true, rxrpc_propose_ack_respond_to_ping); } else if (sp->hdr.flags & RXRPC_REQUEST_ACK) { rxrpc_propose_ACK(call, RXRPC_ACK_REQUESTED, - sp->hdr.serial, true, true, + ack_serial, true, true, rxrpc_propose_ack_respond_to_ack); }
/* Discard any out-of-order or duplicate ACKs (outside lock). */ if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) { - trace_rxrpc_rx_discard_ack(call->debug_id, sp->hdr.serial, + trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial, first_soft_ack, call->ackr_first_seq, prev_pkt, call->ackr_prev_seq); return; @@ -904,7 +905,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
/* Discard any out-of-order or duplicate ACKs (inside lock). */ if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) { - trace_rxrpc_rx_discard_ack(call->debug_id, sp->hdr.serial, + trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial, first_soft_ack, call->ackr_first_seq, prev_pkt, call->ackr_prev_seq); goto out; @@ -964,7 +965,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) RXRPC_TX_ANNO_LAST && summary.nr_acks == call->tx_top - hard_ack && rxrpc_is_client_call(call)) - rxrpc_propose_ACK(call, RXRPC_ACK_PING, sp->hdr.serial, + rxrpc_propose_ACK(call, RXRPC_ACK_PING, ack_serial, false, true, rxrpc_propose_ack_ping_for_lost_reply);
From: Stefano Brivio sbrivio@redhat.com
[ Upstream commit 0726763043dc10dd4c12481f050b1a5ef8f15410 ]
Getting creative with nft and omitting the interval_overlap() check from the set_overlap() function, without omitting set_overlap() altogether, led to the observation of a partial overlap that wasn't detected, and would actually result in replacement of the end element of an existing interval.
This is due to the fact that we return -EEXIST on a matching, pre-existing start element instead of -ENOTEMPTY, and the error is cleared by the API if NLM_F_EXCL is not given. At this point, we can insert a matching start and duplicate the end element as long as we don't end up in other intervals.
For instance, inserting interval 0 - 2 with an existing 0 - 3 interval would result in a single 0 - 2 interval, and a dangling '3' end element. This is because nft will proceed after inserting the '0' start element as no error is reported, and no further conflicting intervals are detected on insertion of the end element.
This needs a different approach as it's a local condition that can be detected by looking for duplicate ends coming from left and right, separately. Track those and directly report -ENOTEMPTY on duplicated end elements for a matching start.
Signed-off-by: Stefano Brivio sbrivio@redhat.com Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/netfilter/nft_set_rbtree.c | 34 +++++++++++++++++++++++++++++++++- 1 file changed, 33 insertions(+), 1 deletion(-)
diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c index b85ce6f0c0a6f..f317ad80cd6bc 100644 --- a/net/netfilter/nft_set_rbtree.c +++ b/net/netfilter/nft_set_rbtree.c @@ -218,11 +218,11 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set, struct nft_rbtree_elem *new, struct nft_set_ext **ext) { + bool overlap = false, dup_end_left = false, dup_end_right = false; struct nft_rbtree *priv = nft_set_priv(set); u8 genmask = nft_genmask_next(net); struct nft_rbtree_elem *rbe; struct rb_node *parent, **p; - bool overlap = false; int d;
/* Detect overlaps as we descend the tree. Set the flag in these cases: @@ -262,6 +262,20 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set, * * which always happen as last step and imply that no further * overlapping is possible. + * + * Another special case comes from the fact that start elements matching + * an already existing start element are allowed: insertion is not + * performed but we return -EEXIST in that case, and the error will be + * cleared by the caller if NLM_F_EXCL is not present in the request. + * This way, request for insertion of an exact overlap isn't reported as + * error to userspace if not desired. + * + * However, if the existing start matches a pre-existing start, but the + * end element doesn't match the corresponding pre-existing end element, + * we need to report a partial overlap. This is a local condition that + * can be noticed without need for a tracking flag, by checking for a + * local duplicated end for a corresponding start, from left and right, + * separately. */
parent = NULL; @@ -281,19 +295,35 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set, !nft_set_elem_expired(&rbe->ext) && !*p) overlap = false; } else { + if (dup_end_left && !*p) + return -ENOTEMPTY; + overlap = nft_rbtree_interval_end(rbe) && nft_set_elem_active(&rbe->ext, genmask) && !nft_set_elem_expired(&rbe->ext); + + if (overlap) { + dup_end_right = true; + continue; + } } } else if (d > 0) { p = &parent->rb_right;
if (nft_rbtree_interval_end(new)) { + if (dup_end_right && !*p) + return -ENOTEMPTY; + overlap = nft_rbtree_interval_end(rbe) && nft_set_elem_active(&rbe->ext, genmask) && !nft_set_elem_expired(&rbe->ext); + + if (overlap) { + dup_end_left = true; + continue; + } } else if (nft_set_elem_active(&rbe->ext, genmask) && !nft_set_elem_expired(&rbe->ext)) { overlap = nft_rbtree_interval_end(rbe); @@ -321,6 +351,8 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set, p = &parent->rb_left; } } + + dup_end_left = dup_end_right = false; }
if (overlap)
From: Xie He xie.he.0141@gmail.com
[ Upstream commit 1ee39c1448c4e0d480c5b390e2db1987561fb5c2 ]
The underlying Ethernet device may request necessary tailroom to be allocated by setting needed_tailroom. This driver should therefore also set needed_tailroom, so that the tailroom required by the underlying Ethernet device gets allocated.
Cc: Willem de Bruijn willemdebruijn.kernel@gmail.com Cc: Martin Schiller ms@dev.tdt.de Signed-off-by: Xie He xie.he.0141@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wan/lapbether.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c index 1ea15f2123ed5..cc297ea9c6ece 100644 --- a/drivers/net/wan/lapbether.c +++ b/drivers/net/wan/lapbether.c @@ -340,6 +340,7 @@ static int lapbeth_new_device(struct net_device *dev) */ ndev->needed_headroom = -1 + 3 + 2 + dev->hard_header_len + dev->needed_headroom; + ndev->needed_tailroom = dev->needed_tailroom;
lapbeth = netdev_priv(ndev); lapbeth->axdev = ndev;
From: Dinghao Liu dinghao.liu@zju.edu.cn
[ Upstream commit f97c04c316d8fea16dca449fdfbe101fbdfee6a2 ]
When down_killable() fails, skb_resp should be freed just like when st95hf_spi_send() fails.
Signed-off-by: Dinghao Liu dinghao.liu@zju.edu.cn Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nfc/st95hf/core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/nfc/st95hf/core.c b/drivers/nfc/st95hf/core.c index 9642971e89cea..4578547659839 100644 --- a/drivers/nfc/st95hf/core.c +++ b/drivers/nfc/st95hf/core.c @@ -966,7 +966,7 @@ static int st95hf_in_send_cmd(struct nfc_digital_dev *ddev, rc = down_killable(&stcontext->exchange_lock); if (rc) { WARN(1, "Semaphore is not found up in st95hf_in_send_cmd\n"); - return rc; + goto free_skb_resp; }
rc = st95hf_spi_send(&stcontext->spicontext, skb->data,
From: Dinghao Liu dinghao.liu@zju.edu.cn
[ Upstream commit 15ac5cdafb9202424206dc5bd376437a358963f9 ]
When make_rate() fails, vcc should be freed just like other error paths in fs_open().
Signed-off-by: Dinghao Liu dinghao.liu@zju.edu.cn Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/atm/firestream.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/atm/firestream.c b/drivers/atm/firestream.c index cc87004d5e2d6..5f22555933df1 100644 --- a/drivers/atm/firestream.c +++ b/drivers/atm/firestream.c @@ -998,6 +998,7 @@ static int fs_open(struct atm_vcc *atm_vcc) error = make_rate (pcr, r, &tmc0, NULL); if (error) { kfree(tc); + kfree(vcc); return error; } }
From: Ye Bin yebin10@huawei.com
[ Upstream commit f308a35f547cd7c1d8a901c12b3ac508e96df665 ]
Fix qedf_stag_change_work() to not dereference the qedf pointer in the error message that is printed when qedf is NULL.

Link: https://lore.kernel.org/r/20200824033436.45570-1-yebin10@huawei.com Acked-by: Saurav Kashyap skashyap@marvell.com Signed-off-by: Ye Bin yebin10@huawei.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/qedf/qedf_main.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c index 36b1ca2dadbb5..51cfab9d1afdc 100644 --- a/drivers/scsi/qedf/qedf_main.c +++ b/drivers/scsi/qedf/qedf_main.c @@ -3843,7 +3843,7 @@ void qedf_stag_change_work(struct work_struct *work) container_of(work, struct qedf_ctx, stag_work.work);
if (!qedf) { - QEDF_ERR(&qedf->dbg_ctx, "qedf is NULL"); + QEDF_ERR(NULL, "qedf is NULL"); return; } QEDF_ERR(&qedf->dbg_ctx, "Performing software context reset.\n");
From: Mohan Kumar mkumard@nvidia.com
[ Upstream commit 216116eae43963c662eb84729507bad95214ca6b ]
The Tegra HDA codec hardware implementation has an issue where it does not apply a swapped 2-channel Audio Sample Packet (ASP) channel mapping: whatever FL and FR mapping is specified, the left channel always comes out of the left speaker and the right channel out of the right speaker. So add a condition to disallow swapping of FL and FR during playback.
Signed-off-by: Mohan Kumar mkumard@nvidia.com Acked-by: Sameer Pujar spujar@nvidia.com Link: https://lore.kernel.org/r/20200825052415.20626-2-mkumard@nvidia.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- sound/pci/hda/patch_hdmi.c | 5 +++++ 1 file changed, 5 insertions(+)
diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c index f0c6d2907e396..9442127d9b15d 100644 --- a/sound/pci/hda/patch_hdmi.c +++ b/sound/pci/hda/patch_hdmi.c @@ -3670,6 +3670,7 @@ static int tegra_hdmi_build_pcms(struct hda_codec *codec)
static int patch_tegra_hdmi(struct hda_codec *codec) { + struct hdmi_spec *spec; int err;
err = patch_generic_hdmi(codec); @@ -3677,6 +3678,10 @@ static int patch_tegra_hdmi(struct hda_codec *codec) return err;
codec->patch_ops.build_pcms = tegra_hdmi_build_pcms; + spec = codec->spec; + spec->chmap.ops.chmap_cea_alloc_validate_get_type = + nvhdmi_chmap_cea_alloc_validate_get_type; + spec->chmap.ops.chmap_validate = nvhdmi_chmap_validate;
return 0; }
From: Mohan Kumar mkumard@nvidia.com
[ Upstream commit 23d63a31d9f44d7daeac0d1fb65c6a73c70e5216 ]
The WAKEEN bits are used to indicate which bits in the STATESTS register may cause wake event during the codec state change request. Configure the WAKEEN register for the Tegra to detect the wake events.
Signed-off-by: Mohan Kumar mkumard@nvidia.com Acked-by: Sameer Pujar spujar@nvidia.com Link: https://lore.kernel.org/r/20200825052415.20626-3-mkumard@nvidia.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- sound/pci/hda/hda_tegra.c | 7 +++++++ 1 file changed, 7 insertions(+)
diff --git a/sound/pci/hda/hda_tegra.c b/sound/pci/hda/hda_tegra.c index 0cc5fad1af8a9..ae40ca3f29837 100644 --- a/sound/pci/hda/hda_tegra.c +++ b/sound/pci/hda/hda_tegra.c @@ -179,6 +179,10 @@ static int __maybe_unused hda_tegra_runtime_suspend(struct device *dev) struct hda_tegra *hda = container_of(chip, struct hda_tegra, chip);
if (chip && chip->running) { + /* enable controller wake up event */ + azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) | + STATESTS_INT_MASK); + azx_stop_chip(chip); azx_enter_link_reset(chip); } @@ -200,6 +204,9 @@ static int __maybe_unused hda_tegra_runtime_resume(struct device *dev) if (chip && chip->running) { hda_tegra_init(hda); azx_init_chip(chip, 1); + /* disable controller wake up event*/ + azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) & + ~STATESTS_INT_MASK); }
return 0;
From: Madhuparna Bhowmik madhuparnabhowmik10@gmail.com
[ Upstream commit 6d6018fc30bee67290dbed2fa51123f7c6f3d691 ]
In probe, the IRQ is requested before zchan->id is initialized, and zchan->id can be read in the irq handler. Hence, move the IRQ request to after the other initialization completes.
Found by Linux Driver Verification project (linuxtesting.org).
Signed-off-by: Madhuparna Bhowmik madhuparnabhowmik10@gmail.com Reviewed-by: Paul Cercueil paul@crapouillou.net Link: https://lore.kernel.org/r/20200821034423.12713-1-madhuparnabhowmik10@gmail.c... Signed-off-by: Vinod Koul vkoul@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/dma/dma-jz4780.c | 38 +++++++++++++++++++------------------- 1 file changed, 19 insertions(+), 19 deletions(-)
diff --git a/drivers/dma/dma-jz4780.c b/drivers/dma/dma-jz4780.c index 448f663da89c6..8beed91428bd6 100644 --- a/drivers/dma/dma-jz4780.c +++ b/drivers/dma/dma-jz4780.c @@ -879,24 +879,11 @@ static int jz4780_dma_probe(struct platform_device *pdev) return -EINVAL; }
- ret = platform_get_irq(pdev, 0); - if (ret < 0) - return ret; - - jzdma->irq = ret; - - ret = request_irq(jzdma->irq, jz4780_dma_irq_handler, 0, dev_name(dev), - jzdma); - if (ret) { - dev_err(dev, "failed to request IRQ %u!\n", jzdma->irq); - return ret; - } - jzdma->clk = devm_clk_get(dev, NULL); if (IS_ERR(jzdma->clk)) { dev_err(dev, "failed to get clock\n"); ret = PTR_ERR(jzdma->clk); - goto err_free_irq; + return ret; }
clk_prepare_enable(jzdma->clk); @@ -949,10 +936,23 @@ static int jz4780_dma_probe(struct platform_device *pdev) jzchan->vchan.desc_free = jz4780_dma_desc_free; }
+ ret = platform_get_irq(pdev, 0); + if (ret < 0) + goto err_disable_clk; + + jzdma->irq = ret; + + ret = request_irq(jzdma->irq, jz4780_dma_irq_handler, 0, dev_name(dev), + jzdma); + if (ret) { + dev_err(dev, "failed to request IRQ %u!\n", jzdma->irq); + goto err_disable_clk; + } + ret = dmaenginem_async_device_register(dd); if (ret) { dev_err(dev, "failed to register device\n"); - goto err_disable_clk; + goto err_free_irq; }
/* Register with OF DMA helpers. */ @@ -960,17 +960,17 @@ static int jz4780_dma_probe(struct platform_device *pdev) jzdma); if (ret) { dev_err(dev, "failed to register OF DMA controller\n"); - goto err_disable_clk; + goto err_free_irq; }
dev_info(dev, "JZ4780 DMA controller initialised\n"); return 0;
-err_disable_clk: - clk_disable_unprepare(jzdma->clk); - err_free_irq: free_irq(jzdma->irq, jzdma); + +err_disable_clk: + clk_disable_unprepare(jzdma->clk); return ret; }
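The relocated error labels keep the usual rule that teardown runs in reverse order of acquisition; a generic userspace sketch of that goto pattern (the resource functions here are made up, this is not the jz4780 code):

#include <stdio.h>
#include <stdlib.h>

/* hypothetical stand-ins for clk_prepare_enable()/request_irq()/... */
static int get_clock(void)    { puts("clock on");  return 0; }
static void put_clock(void)   { puts("clock off"); }
static int get_irq(void)      { puts("irq on");    return 0; }
static void put_irq(void)     { puts("irq off"); }
static int register_dev(void) { puts("register");  return -1; /* force failure */ }

static int probe(void)
{
	int ret;

	ret = get_clock();
	if (ret)
		return ret;

	ret = get_irq();		/* acquired after the clock ... */
	if (ret)
		goto err_put_clock;

	ret = register_dev();
	if (ret)
		goto err_put_irq;	/* ... so it is released before it */

	return 0;

err_put_irq:
	put_irq();
err_put_clock:
	put_clock();
	return ret;
}

int main(void)
{
	return probe() ? EXIT_FAILURE : EXIT_SUCCESS;
}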
From: Mingming Cao mmc@linux.vnet.ibm.com
[ Upstream commit 9f13457377907fa253aef560e1a37e1ca4197f9b ]
At the time of do_reset, ibmvnic tries to re-initialize the tx_pools and rx_pools to avoid re-allocating the long term buffer. However, there is a window inside do_reset where the tx_pools and rx_pools are freed before being re-initialized, making it possible to dereference NULL pointers.
This patch fixes the issue by always checking that tx_pool and rx_pool are not NULL after ibmvnic_login and re-allocating the pools if they are. This avoids calling reset_tx/rx_pools with NULL adapter tx_pools/rx_pools pointers. Also add NULL pointer checks in reset_tx_pools and reset_rx_pools to safely handle the NULL pointer case.
Signed-off-by: Mingming Cao mmc@linux.vnet.ibm.com Signed-off-by: Dany Madden drt@linux.ibm.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/ibm/ibmvnic.c | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c index 5afb3c9c52d20..d3a774331afc7 100644 --- a/drivers/net/ethernet/ibm/ibmvnic.c +++ b/drivers/net/ethernet/ibm/ibmvnic.c @@ -479,6 +479,9 @@ static int reset_rx_pools(struct ibmvnic_adapter *adapter) int i, j, rc; u64 *size_array;
+ if (!adapter->rx_pool) + return -1; + size_array = (u64 *)((u8 *)(adapter->login_rsp_buf) + be32_to_cpu(adapter->login_rsp_buf->off_rxadd_buff_size));
@@ -649,6 +652,9 @@ static int reset_tx_pools(struct ibmvnic_adapter *adapter) int tx_scrqs; int i, rc;
+ if (!adapter->tx_pool) + return -1; + tx_scrqs = be32_to_cpu(adapter->login_rsp_buf->num_txsubm_subcrqs); for (i = 0; i < tx_scrqs; i++) { rc = reset_one_tx_pool(adapter, &adapter->tso_pool[i]); @@ -2011,7 +2017,10 @@ static int do_reset(struct ibmvnic_adapter *adapter, adapter->req_rx_add_entries_per_subcrq != old_num_rx_slots || adapter->req_tx_entries_per_subcrq != - old_num_tx_slots) { + old_num_tx_slots || + !adapter->rx_pool || + !adapter->tso_pool || + !adapter->tx_pool) { release_rx_pools(adapter); release_tx_pools(adapter); release_napi(adapter); @@ -2024,10 +2033,14 @@ static int do_reset(struct ibmvnic_adapter *adapter, } else { rc = reset_tx_pools(adapter); if (rc) + netdev_dbg(adapter->netdev, "reset tx pools failed (%d)\n", + rc); goto out;
rc = reset_rx_pools(adapter); if (rc) + netdev_dbg(adapter->netdev, "reset rx pools failed (%d)\n", + rc); goto out; } ibmvnic_disable_irqs(adapter);
On Mon, 7 Sep 2020 12:31:40 -0400 Sasha Levin wrote:
[ Upstream commit 9f13457377907fa253aef560e1a37e1ca4197f9b ]
@@ -2024,10 +2033,14 @@ static int do_reset(struct ibmvnic_adapter *adapter,
	} else {
		rc = reset_tx_pools(adapter);
		if (rc)
			netdev_dbg(adapter->netdev, "reset tx pools failed (%d)\n",
				   rc);
			goto out;

		rc = reset_rx_pools(adapter);
		if (rc)
			netdev_dbg(adapter->netdev, "reset rx pools failed (%d)\n",
				   rc);
			goto out;
	}
	ibmvnic_disable_irqs(adapter);
Hi Sasha!
I just pushed this to net:
8ae4dff882eb ("ibmvnic: add missing parenthesis in do_reset()")
You definitely want to pull that in if you decide to backport this one.
On Mon, Sep 07, 2020 at 02:10:26PM -0700, Jakub Kicinski wrote:
Hi Sasha!
I just pushed this to net:
8ae4dff882eb ("ibmvnic: add missing parenthesis in do_reset()")
You definitely want to pull that in if you decide to backport this one.
Will do, thanks!
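For anyone following along, the follow-up commit appears to address plain C statement scoping: with the netdev_dbg() added but no braces, only the first statement stays under the if and the goto becomes unconditional. A minimal standalone illustration of that pitfall (hypothetical values, not driver code):

#include <stdio.h>

int main(void)
{
	int rc = 0;	/* success case */

	if (rc)
		printf("reset tx pools failed (%d)\n", rc);
		goto out;	/* not governed by the if: taken even when rc == 0 */

	printf("this line is never reached\n");
out:
	return 0;
}

Wrapping both statements in braces restores the intended "only on error" behaviour.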
From: Yi Li yili@winhong.com
[ Upstream commit a156998fc92d3859c8e820f1583f6d0541d643c3 ]
When skb->encapsulation is 0, skb->ip_summed is CHECKSUM_PARTIAL and it is a UDP packet whose dest port is one of the IANA-assigned tunnel ports, the hardware is expected to do the checksum offload, but the hardware will not do the checksum offload when the UDP dest port is 6081.
This patch fixes it by doing the checksum in software.
Reported-by: Li Bing libing@winhong.com Signed-off-by: Yi Li yili@winhong.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c index 71ed4c54f6d5d..eaadcc7043349 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c @@ -20,6 +20,7 @@ #include <net/pkt_cls.h> #include <net/tcp.h> #include <net/vxlan.h> +#include <net/geneve.h>
#include "hnae3.h" #include "hns3_enet.h" @@ -780,7 +781,7 @@ static int hns3_get_l4_protocol(struct sk_buff *skb, u8 *ol4_proto, * and it is udp packet, which has a dest port as the IANA assigned. * the hardware is expected to do the checksum offload, but the * hardware will not do the checksum offload when udp dest port is - * 4789. + * 4789 or 6081. */ static bool hns3_tunnel_csum_bug(struct sk_buff *skb) { @@ -789,7 +790,8 @@ static bool hns3_tunnel_csum_bug(struct sk_buff *skb) l4.hdr = skb_transport_header(skb);
if (!(!skb->encapsulation && - l4.udp->dest == htons(IANA_VXLAN_UDP_PORT))) + (l4.udp->dest == htons(IANA_VXLAN_UDP_PORT) || + l4.udp->dest == htons(GENEVE_UDP_PORT)))) return false;
skb_checksum_help(skb);
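The two tunnel ports involved are well-known IANA assignments (4789 for VXLAN, 6081 for Geneve); a tiny userspace check mirroring the updated condition, with the driver context stripped away:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include <arpa/inet.h>

#define IANA_VXLAN_UDP_PORT 4789
#define GENEVE_UDP_PORT     6081

/* dest is the UDP destination port in network byte order, as on the wire */
static bool needs_sw_csum_workaround(uint16_t dest, bool encapsulation)
{
	return !encapsulation &&
	       (dest == htons(IANA_VXLAN_UDP_PORT) ||
		dest == htons(GENEVE_UDP_PORT));
}

int main(void)
{
	printf("vxlan:  %d\n", needs_sw_csum_workaround(htons(4789), false));
	printf("geneve: %d\n", needs_sw_csum_workaround(htons(6081), false));
	printf("https:  %d\n", needs_sw_csum_workaround(htons(443), false));
	return 0;
}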
From: Brian Foster bfoster@redhat.com
[ Upstream commit 657f101930bc6c5b41bd7d6c22565c4302a80d33 ]
The inode chunk allocation transaction reserves inobt_maxlevels-1 blocks to accommodate a full split of the inode btree. A full split requires an allocation for every existing level and a new root block, which means inobt_maxlevels is the worst case block requirement for a transaction that inserts to the inobt. This can lead to a transaction block reservation overrun when tmpfile creation allocates an inode chunk and expands the inobt to its maximum depth. This problem has been observed in conjunction with overlayfs, which makes frequent use of tmpfiles internally.
The existing reservation code goes back as far as the Linux git repo history (v2.6.12). It was likely never observed as a problem because the traditional file/directory creation transactions also include worst case block reservation for directory modifications, which most likely is able to make up for a single block deficiency in the inode allocation portion of the calculation. tmpfile support is relatively more recent (v3.15), less heavily used, and only includes the inode allocation block reservation as tmpfiles aren't linked into the directory tree on creation.
Fix up the inode alloc block reservation macro and a couple of the block allocator minleft parameters that enforce an allocation to leave enough free blocks in the AG for a full inobt split.
Signed-off-by: Brian Foster bfoster@redhat.com Reviewed-by: Darrick J. Wong darrick.wong@oracle.com Signed-off-by: Darrick J. Wong darrick.wong@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/xfs/libxfs/xfs_ialloc.c | 4 ++-- fs/xfs/libxfs/xfs_trans_space.h | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/fs/xfs/libxfs/xfs_ialloc.c b/fs/xfs/libxfs/xfs_ialloc.c index 7fcf62b324b0d..8c1a7cc484b65 100644 --- a/fs/xfs/libxfs/xfs_ialloc.c +++ b/fs/xfs/libxfs/xfs_ialloc.c @@ -688,7 +688,7 @@ xfs_ialloc_ag_alloc( args.minalignslop = igeo->cluster_align - 1;
/* Allow space for the inode btree to split. */ - args.minleft = igeo->inobt_maxlevels - 1; + args.minleft = igeo->inobt_maxlevels; if ((error = xfs_alloc_vextent(&args))) return error;
@@ -736,7 +736,7 @@ xfs_ialloc_ag_alloc( /* * Allow space for the inode btree to split. */ - args.minleft = igeo->inobt_maxlevels - 1; + args.minleft = igeo->inobt_maxlevels; if ((error = xfs_alloc_vextent(&args))) return error; } diff --git a/fs/xfs/libxfs/xfs_trans_space.h b/fs/xfs/libxfs/xfs_trans_space.h index c6df01a2a1585..7ad3659c5d2a9 100644 --- a/fs/xfs/libxfs/xfs_trans_space.h +++ b/fs/xfs/libxfs/xfs_trans_space.h @@ -58,7 +58,7 @@ #define XFS_IALLOC_SPACE_RES(mp) \ (M_IGEO(mp)->ialloc_blks + \ ((xfs_sb_version_hasfinobt(&mp->m_sb) ? 2 : 1) * \ - (M_IGEO(mp)->inobt_maxlevels - 1))) + M_IGEO(mp)->inobt_maxlevels))
/* * Space reservation values for various transactions.
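The arithmetic behind the one-block bump, with made-up example geometry (ialloc_blks and inobt_maxlevels below are illustrative numbers, not values from a real filesystem):

#include <stdio.h>

int main(void)
{
	unsigned int ialloc_blks = 16;		/* example: blocks per inode chunk */
	unsigned int inobt_maxlevels = 3;	/* example: max inode btree depth */
	unsigned int nbtrees = 2;		/* inobt + finobt (1 if no finobt) */

	/* old: one block per existing level, forgetting the new root block */
	unsigned int old_res = ialloc_blks + nbtrees * (inobt_maxlevels - 1);
	/* new: a full split needs a block per level plus a new root */
	unsigned int new_res = ialloc_blks + nbtrees * inobt_maxlevels;

	printf("old reservation: %u blocks\n", old_res);	/* 20 */
	printf("new reservation: %u blocks\n", new_res);	/* 22 */
	return 0;
}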
From: Xie He xie.he.0141@gmail.com
[ Upstream commit 91244d108441013b7367b3b4dcc6869998676473 ]
Set the skb's network_header before it is passed to the underlying Ethernet device for transmission.
This patch fixes the following issue:
When we use this driver with AF_PACKET sockets, error messages like: protocol 0805 is buggy, dev (Ethernet interface name) are printed in the system "dmesg" log.
This is because skbs passed down to the Ethernet device for transmission don't have their network_header properly set, and the dev_queue_xmit_nit function in net/core/dev.c complains about this.
Reason for setting the network_header to this place (at the end of the Ethernet header, and at the beginning of the Ethernet payload):
Because when this driver receives an skb from the Ethernet device, the network_header is also set at this place.
Cc: Martin Schiller ms@dev.tdt.de Signed-off-by: Xie He xie.he.0141@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wan/lapbether.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c index cc297ea9c6ece..e61616b0b91c7 100644 --- a/drivers/net/wan/lapbether.c +++ b/drivers/net/wan/lapbether.c @@ -210,6 +210,8 @@ static void lapbeth_data_transmit(struct net_device *ndev, struct sk_buff *skb)
skb->dev = dev = lapbeth->ethdev;
+ skb_reset_network_header(skb); + dev_hard_header(skb, dev, ETH_P_DEC, bcast_addr, NULL, 0);
dev_queue_xmit(skb);
From: Shay Bar shay.bar@celeno.com
[ Upstream commit 3579994476b65cb5e272ff0f720a1fd31322e53f ]
Fix cfg80211_chandef_usable(): consider IEEE80211_VHT_CAP_EXT_NSS_BW when verifying 160/80+80 MHz.
Based on: "Table 9-272 — Setting of the Supported Channel Width Set subfield and Extended NSS BW Support subfield at a STA transmitting the VHT Capabilities Information field"
From "Draft P802.11REVmd_D3.0.pdf"
Signed-off-by: Aviad Brikman aviad.brikman@celeno.com Signed-off-by: Shay Bar shay.bar@celeno.com Link: https://lore.kernel.org/r/20200826143139.25976-1-shay.bar@celeno.com [reformat the code a bit and use u32_get_bits()] Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/wireless/chan.c | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/net/wireless/chan.c b/net/wireless/chan.c index cddf92c5d09ef..7a7cc4ade2b36 100644 --- a/net/wireless/chan.c +++ b/net/wireless/chan.c @@ -10,6 +10,7 @@ */
#include <linux/export.h> +#include <linux/bitfield.h> #include <net/cfg80211.h> #include "core.h" #include "rdev-ops.h" @@ -892,6 +893,7 @@ bool cfg80211_chandef_usable(struct wiphy *wiphy, struct ieee80211_sta_vht_cap *vht_cap; struct ieee80211_edmg *edmg_cap; u32 width, control_freq, cap; + bool support_80_80 = false;
if (WARN_ON(!cfg80211_chandef_valid(chandef))) return false; @@ -944,9 +946,13 @@ bool cfg80211_chandef_usable(struct wiphy *wiphy, return false; break; case NL80211_CHAN_WIDTH_80P80: - cap = vht_cap->cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK; - if (chandef->chan->band != NL80211_BAND_6GHZ && - cap != IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ) + cap = vht_cap->cap; + support_80_80 = + (cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ) || + (cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ && + cap & IEEE80211_VHT_CAP_EXT_NSS_BW_MASK) || + u32_get_bits(cap, IEEE80211_VHT_CAP_EXT_NSS_BW_MASK) > 1; + if (chandef->chan->band != NL80211_BAND_6GHZ && !support_80_80) return false; /* fall through */ case NL80211_CHAN_WIDTH_80: @@ -966,7 +972,8 @@ bool cfg80211_chandef_usable(struct wiphy *wiphy, return false; cap = vht_cap->cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK; if (cap != IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ && - cap != IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ) + cap != IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ && + !(vht_cap->cap & IEEE80211_VHT_CAP_EXT_NSS_BW_MASK)) return false; break; default:
From: Johannes Berg johannes.berg@intel.com
[ Upstream commit 47caf685a6854593348f216e0b489b71c10cbe03 ]
Reject invalid hints early in order to not cause a kernel WARN later if they're restored to or similar.
Reported-by: syzbot+d451401ffd00a60677ee@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?extid=d451401ffd00a60677ee Link: https://lore.kernel.org/r/20200819084648.13956-1-johannes@sipsolutions.net Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/wireless/reg.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/net/wireless/reg.c b/net/wireless/reg.c index 0d74a31ef0ab4..fc2af2c8b6d54 100644 --- a/net/wireless/reg.c +++ b/net/wireless/reg.c @@ -2944,6 +2944,9 @@ int regulatory_hint_user(const char *alpha2, if (WARN_ON(!alpha2)) return -EINVAL;
+ if (!is_world_regdom(alpha2) && !is_an_alpha2(alpha2)) + return -EINVAL; + request = kzalloc(sizeof(struct regulatory_request), GFP_KERNEL); if (!request) return -ENOMEM;
From: Felix Fietkau nbd@nbd.name
[ Upstream commit 47df8e059b49a80c179fae39256bcd7096810934 ]
When running a large number of packets per second with a high data rate and long A-MPDUs, the packet loss threshold can be reached very quickly when the link conditions change. This frequently shows up as spurious disconnects. Mitigate false positives by using a similar logic for regular stations as the one being used for TDLS, though with a more aggressive timeout. Packet loss events are only reported if no ACK was received for a second.
Signed-off-by: Felix Fietkau nbd@nbd.name Link: https://lore.kernel.org/r/20200808172542.41628-1-nbd@nbd.name Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/mac80211/sta_info.h | 4 ++-- net/mac80211/status.c | 31 +++++++++++++++---------------- 2 files changed, 17 insertions(+), 18 deletions(-)
diff --git a/net/mac80211/sta_info.h b/net/mac80211/sta_info.h index 49728047dfad6..f66fcce8e6a45 100644 --- a/net/mac80211/sta_info.h +++ b/net/mac80211/sta_info.h @@ -522,7 +522,7 @@ struct ieee80211_sta_rx_stats { * @status_stats.retry_failed: # of frames that failed after retry * @status_stats.retry_count: # of retries attempted * @status_stats.lost_packets: # of lost packets - * @status_stats.last_tdls_pkt_time: timestamp of last TDLS packet + * @status_stats.last_pkt_time: timestamp of last ACKed packet * @status_stats.msdu_retries: # of MSDU retries * @status_stats.msdu_failed: # of failed MSDUs * @status_stats.last_ack: last ack timestamp (jiffies) @@ -595,7 +595,7 @@ struct sta_info { unsigned long filtered; unsigned long retry_failed, retry_count; unsigned int lost_packets; - unsigned long last_tdls_pkt_time; + unsigned long last_pkt_time; u64 msdu_retries[IEEE80211_NUM_TIDS + 1]; u64 msdu_failed[IEEE80211_NUM_TIDS + 1]; unsigned long last_ack; diff --git a/net/mac80211/status.c b/net/mac80211/status.c index cbc40b358ba26..819c4221c284e 100644 --- a/net/mac80211/status.c +++ b/net/mac80211/status.c @@ -755,12 +755,16 @@ static void ieee80211_report_used_skb(struct ieee80211_local *local, * - current throughput (higher value for higher tpt)? */ #define STA_LOST_PKT_THRESHOLD 50 +#define STA_LOST_PKT_TIME HZ /* 1 sec since last ACK */ #define STA_LOST_TDLS_PKT_THRESHOLD 10 #define STA_LOST_TDLS_PKT_TIME (10*HZ) /* 10secs since last ACK */
static void ieee80211_lost_packet(struct sta_info *sta, struct ieee80211_tx_info *info) { + unsigned long pkt_time = STA_LOST_PKT_TIME; + unsigned int pkt_thr = STA_LOST_PKT_THRESHOLD; + /* If driver relies on its own algorithm for station kickout, skip * mac80211 packet loss mechanism. */ @@ -773,21 +777,20 @@ static void ieee80211_lost_packet(struct sta_info *sta, return;
sta->status_stats.lost_packets++; - if (!sta->sta.tdls && - sta->status_stats.lost_packets < STA_LOST_PKT_THRESHOLD) - return; + if (sta->sta.tdls) { + pkt_time = STA_LOST_TDLS_PKT_TIME; + pkt_thr = STA_LOST_PKT_THRESHOLD; + }
/* * If we're in TDLS mode, make sure that all STA_LOST_TDLS_PKT_THRESHOLD * of the last packets were lost, and that no ACK was received in the * last STA_LOST_TDLS_PKT_TIME ms, before triggering the CQM packet-loss * mechanism. + * For non-TDLS, use STA_LOST_PKT_THRESHOLD and STA_LOST_PKT_TIME */ - if (sta->sta.tdls && - (sta->status_stats.lost_packets < STA_LOST_TDLS_PKT_THRESHOLD || - time_before(jiffies, - sta->status_stats.last_tdls_pkt_time + - STA_LOST_TDLS_PKT_TIME))) + if (sta->status_stats.lost_packets < pkt_thr || + !time_after(jiffies, sta->status_stats.last_pkt_time + pkt_time)) return;
cfg80211_cqm_pktloss_notify(sta->sdata->dev, sta->sta.addr, @@ -1035,9 +1038,7 @@ static void __ieee80211_tx_status(struct ieee80211_hw *hw, sta->status_stats.lost_packets = 0;
/* Track when last TDLS packet was ACKed */ - if (test_sta_flag(sta, WLAN_STA_TDLS_PEER_AUTH)) - sta->status_stats.last_tdls_pkt_time = - jiffies; + sta->status_stats.last_pkt_time = jiffies; } else if (noack_success) { /* nothing to do here, do not account as lost */ } else { @@ -1170,9 +1171,8 @@ void ieee80211_tx_status_ext(struct ieee80211_hw *hw, if (sta->status_stats.lost_packets) sta->status_stats.lost_packets = 0;
- /* Track when last TDLS packet was ACKed */ - if (test_sta_flag(sta, WLAN_STA_TDLS_PEER_AUTH)) - sta->status_stats.last_tdls_pkt_time = jiffies; + /* Track when last packet was ACKed */ + sta->status_stats.last_pkt_time = jiffies; } else if (test_sta_flag(sta, WLAN_STA_PS_STA)) { return; } else if (noack_success) { @@ -1261,8 +1261,7 @@ void ieee80211_tx_status_8023(struct ieee80211_hw *hw, if (sta->status_stats.lost_packets) sta->status_stats.lost_packets = 0;
- if (test_sta_flag(sta, WLAN_STA_TDLS_PEER_AUTH)) - sta->status_stats.last_tdls_pkt_time = jiffies; + sta->status_stats.last_pkt_time = jiffies; } else { ieee80211_lost_packet(sta, info); }
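A userspace model of the reworked decision, with jiffies replaced by plain millisecond counters (the thresholds match the #defines above; everything else is simplified):

#include <stdio.h>
#include <stdbool.h>

#define STA_LOST_PKT_THRESHOLD       50
#define STA_LOST_PKT_TIME_MS       1000	/* 1 s since last ACK */
#define STA_LOST_TDLS_PKT_THRESHOLD  10
#define STA_LOST_TDLS_PKT_TIME_MS 10000	/* 10 s since last ACK */

static bool report_loss(bool tdls, unsigned int lost_packets,
			unsigned long now_ms, unsigned long last_ack_ms)
{
	unsigned long pkt_time = STA_LOST_PKT_TIME_MS;
	unsigned int  pkt_thr  = STA_LOST_PKT_THRESHOLD;

	if (tdls) {
		pkt_time = STA_LOST_TDLS_PKT_TIME_MS;
		pkt_thr  = STA_LOST_PKT_THRESHOLD;	/* as in the patch */
	}

	/* report only if both the loss count and the ACK silence are exceeded */
	return lost_packets >= pkt_thr && now_ms > last_ack_ms + pkt_time;
}

int main(void)
{
	/* 60 lost packets but an ACK 200 ms ago: no event */
	printf("%d\n", report_loss(false, 60, 10200, 10000));
	/* 60 lost packets and 2 s of ACK silence: report */
	printf("%d\n", report_loss(false, 60, 12000, 10000));
	return 0;
}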
From: Amar Singhal asinghal@codeaurora.org
[ Upstream commit 2d9b55508556ccee6410310fb9ea2482fd3328eb ]
Adjust the 6 GHz frequency-to-channel conversion function; the other direction (channel to frequency) was already handled previously.
Signed-off-by: Amar Singhal asinghal@codeaurora.org Link: https://lore.kernel.org/r/1592599921-10607-1-git-send-email-asinghal@codeaur... [rewrite commit message, hard-code channel 2] Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/wireless/util.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/net/wireless/util.c b/net/wireless/util.c index 4d3b76f94f55e..a72d2ad6ade8b 100644 --- a/net/wireless/util.c +++ b/net/wireless/util.c @@ -121,11 +121,13 @@ int ieee80211_freq_khz_to_channel(u32 freq) return (freq - 2407) / 5; else if (freq >= 4910 && freq <= 4980) return (freq - 4000) / 5; - else if (freq < 5945) + else if (freq < 5925) return (freq - 5000) / 5; + else if (freq == 5935) + return 2; else if (freq <= 45000) /* DMG band lower limit */ - /* see 802.11ax D4.1 27.3.22.2 */ - return (freq - 5940) / 5; + /* see 802.11ax D6.1 27.3.22.2 */ + return (freq - 5950) / 5; else if (freq >= 58320 && freq <= 70200) return (freq - 56160) / 2160; else
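The updated 5/6 GHz mapping in isolation, operating directly on MHz center frequencies (a standalone sketch of the hunk above, not the full ieee80211_freq_khz_to_channel() with its kHz handling and 2.4/60 GHz branches):

#include <stdio.h>

/* 5/6 GHz portion of the frequency -> channel mapping after this change */
static int freq_mhz_to_channel(int freq)
{
	if (freq < 5925)
		return (freq - 5000) / 5;	/* legacy 5 GHz numbering */
	if (freq == 5935)
		return 2;			/* 6 GHz operating channel 2 */
	if (freq <= 45000)
		return (freq - 5950) / 5;	/* 6 GHz channels start at 5955 */
	return -1;
}

int main(void)
{
	printf("5935 MHz -> ch %d\n", freq_mhz_to_channel(5935));	/* 2 */
	printf("5955 MHz -> ch %d\n", freq_mhz_to_channel(5955));	/* 1 */
	printf("6115 MHz -> ch %d\n", freq_mhz_to_channel(6115));	/* 33 */
	return 0;
}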
From: Himadri Pandya himadrispandya@gmail.com
[ Upstream commit a092b7233f0e000cc6f2c71a49e2ecc6f917a5fc ]
The buffer size is 2 bytes and we expect to receive the same amount of data. But sometimes we receive less data and run into an uninit-was-stored issue upon read. Hence modify the error check on the return value to match the buffer size as a prevention.
Reported-and-tested-by: syzbot+a7e220df5a81d1ab400e@syzkaller.appspotmail.com Signed-off-by: Himadri Pandya himadrispandya@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/usb/asix_common.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/usb/asix_common.c b/drivers/net/usb/asix_common.c index e39f41efda3ec..7bc6e8f856fe0 100644 --- a/drivers/net/usb/asix_common.c +++ b/drivers/net/usb/asix_common.c @@ -296,7 +296,7 @@ int asix_read_phy_addr(struct usbnet *dev, int internal)
netdev_dbg(dev->net, "asix_get_phy_addr()\n");
- if (ret < 0) { + if (ret < 2) { netdev_err(dev->net, "Error reading PHYID register: %02x\n", ret); goto out; }
From: "Darrick J. Wong" darrick.wong@oracle.com
[ Upstream commit 125eac243806e021f33a1fdea3687eccbb9f7636 ]
Don't leak kernel memory contents into the shortform attr fork.
Signed-off-by: Darrick J. Wong darrick.wong@oracle.com Reviewed-by: Eric Sandeen sandeen@redhat.com Reviewed-by: Dave Chinner dchinner@redhat.com Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Sasha Levin sashal@kernel.org --- fs/xfs/libxfs/xfs_attr_leaf.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/xfs/libxfs/xfs_attr_leaf.c b/fs/xfs/libxfs/xfs_attr_leaf.c index 2f7e89e4be3e3..0ab307bebedee 100644 --- a/fs/xfs/libxfs/xfs_attr_leaf.c +++ b/fs/xfs/libxfs/xfs_attr_leaf.c @@ -653,8 +653,8 @@ xfs_attr_shortform_create( ASSERT(ifp->if_flags & XFS_IFINLINE); } xfs_idata_realloc(dp, sizeof(*hdr), XFS_ATTR_FORK); - hdr = (xfs_attr_sf_hdr_t *)ifp->if_u1.if_data; - hdr->count = 0; + hdr = (struct xfs_attr_sf_hdr *)ifp->if_u1.if_data; + memset(hdr, 0, sizeof(*hdr)); hdr->totsize = cpu_to_be16(sizeof(*hdr)); xfs_trans_log_inode(args->trans, dp, XFS_ILOG_CORE | XFS_ILOG_ADATA); }
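The class of leak being closed here is the generic one where assigning individual fields leaves the remaining header bytes (padding included) holding whatever the allocator handed back; a plain userspace illustration, not the xfs_attr_sf_hdr layout itself:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct hdr {
	unsigned short totsize;
	unsigned char  count;
	/* one byte of compiler-inserted padding typically follows */
};

static void dump(const char *tag, const struct hdr *h)
{
	const unsigned char *p = (const unsigned char *)h;

	printf("%s:", tag);
	for (size_t i = 0; i < sizeof(*h); i++)
		printf(" %02x", p[i]);
	printf("\n");
}

int main(void)
{
	unsigned char *buf = malloc(sizeof(struct hdr));
	if (!buf)
		return 1;
	memset(buf, 0xaa, sizeof(struct hdr));	/* simulate stale heap contents */

	struct hdr *h = (struct hdr *)buf;
	h->count = 0;
	h->totsize = sizeof(*h);
	dump("field-by-field", h);	/* padding byte still 0xaa */

	memset(h, 0, sizeof(*h));
	h->totsize = sizeof(*h);
	dump("memset first  ", h);	/* every byte deterministic */

	free(buf);
	return 0;
}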
From: Vineet Gupta vgupta@synopsys.com
[ Upstream commit e5c388b4b967037a0e00b60194b0dbcf94881a9b ]
While working on ARC64, spotted an issue in ARCv2 reg file printing: print_reg_file() assumes a contiguous reg file, whereas in ARCv2 the regs are not contiguous: r12 comes before r0-r11 due to hardware auto-save. Apparently this issue has been present since the ARCv2 port submission.
To avoid bolting hacks for this discontinuity while looping through pt_regs, just ditch the loop and print pt_regs directly.
Signed-off-by: Vineet Gupta vgupta@synopsys.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arc/kernel/troubleshoot.c | 77 +++++++++++++--------------------- 1 file changed, 30 insertions(+), 47 deletions(-)
diff --git a/arch/arc/kernel/troubleshoot.c b/arch/arc/kernel/troubleshoot.c index 28e8bf04b253f..a331bb5d8319f 100644 --- a/arch/arc/kernel/troubleshoot.c +++ b/arch/arc/kernel/troubleshoot.c @@ -18,44 +18,37 @@
#define ARC_PATH_MAX 256
-/* - * Common routine to print scratch regs (r0-r12) or callee regs (r13-r25) - * -Prints 3 regs per line and a CR. - * -To continue, callee regs right after scratch, special handling of CR - */ -static noinline void print_reg_file(long *reg_rev, int start_num) +static noinline void print_regs_scratch(struct pt_regs *regs) { - unsigned int i; - char buf[512]; - int n = 0, len = sizeof(buf); - - for (i = start_num; i < start_num + 13; i++) { - n += scnprintf(buf + n, len - n, "r%02u: 0x%08lx\t", - i, (unsigned long)*reg_rev); - - if (((i + 1) % 3) == 0) - n += scnprintf(buf + n, len - n, "\n"); - - /* because pt_regs has regs reversed: r12..r0, r25..r13 */ - if (is_isa_arcv2() && start_num == 0) - reg_rev++; - else - reg_rev--; - } - - if (start_num != 0) - n += scnprintf(buf + n, len - n, "\n\n"); + pr_cont("BTA: 0x%08lx\n SP: 0x%08lx FP: 0x%08lx BLK: %pS\n", + regs->bta, regs->sp, regs->fp, (void *)regs->blink); + pr_cont("LPS: 0x%08lx\tLPE: 0x%08lx\tLPC: 0x%08lx\n", + regs->lp_start, regs->lp_end, regs->lp_count);
- /* To continue printing callee regs on same line as scratch regs */ - if (start_num == 0) - pr_info("%s", buf); - else - pr_cont("%s\n", buf); + pr_info("r00: 0x%08lx\tr01: 0x%08lx\tr02: 0x%08lx\n" \ + "r03: 0x%08lx\tr04: 0x%08lx\tr05: 0x%08lx\n" \ + "r06: 0x%08lx\tr07: 0x%08lx\tr08: 0x%08lx\n" \ + "r09: 0x%08lx\tr10: 0x%08lx\tr11: 0x%08lx\n" \ + "r12: 0x%08lx\t", + regs->r0, regs->r1, regs->r2, + regs->r3, regs->r4, regs->r5, + regs->r6, regs->r7, regs->r8, + regs->r9, regs->r10, regs->r11, + regs->r12); }
-static void show_callee_regs(struct callee_regs *cregs) +static void print_regs_callee(struct callee_regs *regs) { - print_reg_file(&(cregs->r13), 13); + pr_cont("r13: 0x%08lx\tr14: 0x%08lx\n" \ + "r15: 0x%08lx\tr16: 0x%08lx\tr17: 0x%08lx\n" \ + "r18: 0x%08lx\tr19: 0x%08lx\tr20: 0x%08lx\n" \ + "r21: 0x%08lx\tr22: 0x%08lx\tr23: 0x%08lx\n" \ + "r24: 0x%08lx\tr25: 0x%08lx\n", + regs->r13, regs->r14, + regs->r15, regs->r16, regs->r17, + regs->r18, regs->r19, regs->r20, + regs->r21, regs->r22, regs->r23, + regs->r24, regs->r25); }
static void print_task_path_n_nm(struct task_struct *tsk) @@ -175,7 +168,7 @@ static void show_ecr_verbose(struct pt_regs *regs) void show_regs(struct pt_regs *regs) { struct task_struct *tsk = current; - struct callee_regs *cregs; + struct callee_regs *cregs = (struct callee_regs *)tsk->thread.callee_reg;
/* * generic code calls us with preemption disabled, but some calls @@ -204,25 +197,15 @@ void show_regs(struct pt_regs *regs) STS_BIT(regs, A2), STS_BIT(regs, A1), STS_BIT(regs, E2), STS_BIT(regs, E1)); #else - pr_cont(" [%2s%2s%2s%2s]", + pr_cont(" [%2s%2s%2s%2s] ", STS_BIT(regs, IE), (regs->status32 & STATUS_U_MASK) ? "U " : "K ", STS_BIT(regs, DE), STS_BIT(regs, AE)); #endif - pr_cont(" BTA: 0x%08lx\n SP: 0x%08lx FP: 0x%08lx BLK: %pS\n", - regs->bta, regs->sp, regs->fp, (void *)regs->blink); - pr_info("LPS: 0x%08lx\tLPE: 0x%08lx\tLPC: 0x%08lx\n", - regs->lp_start, regs->lp_end, regs->lp_count); - - /* print regs->r0 thru regs->r12 - * Sequential printing was generating horrible code - */ - print_reg_file(&(regs->r0), 0);
- /* If Callee regs were saved, display them too */ - cregs = (struct callee_regs *)current->thread.callee_reg; + print_regs_scratch(regs); if (cregs) - show_callee_regs(cregs); + print_regs_callee(cregs);
preempt_disable(); }
From: Vineet Gupta vgupta@synopsys.com
[ Upstream commit 89d29997f103d08264b0685796b420d911658b96 ]
The eznps driver is supposed to be platform independent, however it ends up including stuff from arch/arc headers, leading to randconfig build errors.
The quick hack to fix this (the proper fix is too much churn for a non-active user base) is to add the following to the nps platform-agnostic header:
 - copy AUX_IENABLE from the arch/arc header
 - move CTOP_AUX_IACK from arch/arc/plat-eznps/*/**
Reported-by: kernel test robot lkp@intel.com Reported-by: Sebastian Andrzej Siewior bigeasy@linutronix.de Link: https://lkml.kernel.org/r/20200824095831.5lpkmkafelnvlpi2@linutronix.de Signed-off-by: Vineet Gupta vgupta@synopsys.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arc/plat-eznps/include/plat/ctop.h | 1 - include/soc/nps/common.h | 6 ++++++ 2 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/arc/plat-eznps/include/plat/ctop.h b/arch/arc/plat-eznps/include/plat/ctop.h index a4a61531c7fb9..77712c5ffe848 100644 --- a/arch/arc/plat-eznps/include/plat/ctop.h +++ b/arch/arc/plat-eznps/include/plat/ctop.h @@ -33,7 +33,6 @@ #define CTOP_AUX_DPC (CTOP_AUX_BASE + 0x02C) #define CTOP_AUX_LPC (CTOP_AUX_BASE + 0x030) #define CTOP_AUX_EFLAGS (CTOP_AUX_BASE + 0x080) -#define CTOP_AUX_IACK (CTOP_AUX_BASE + 0x088) #define CTOP_AUX_GPA1 (CTOP_AUX_BASE + 0x08C) #define CTOP_AUX_UDMC (CTOP_AUX_BASE + 0x300)
diff --git a/include/soc/nps/common.h b/include/soc/nps/common.h index 9b1d43d671a3f..8c18dc6d3fde5 100644 --- a/include/soc/nps/common.h +++ b/include/soc/nps/common.h @@ -45,6 +45,12 @@ #define CTOP_INST_MOV2B_FLIP_R3_B1_B2_INST 0x5B60 #define CTOP_INST_MOV2B_FLIP_R3_B1_B2_LIMM 0x00010422
+#ifndef AUX_IENABLE +#define AUX_IENABLE 0x40c +#endif + +#define CTOP_AUX_IACK (0xFFFFF800 + 0x088) + #ifndef __ASSEMBLY__
/* In order to increase compilation test coverage */
From: Sean Young sean@mess.org
[ Upstream commit 1451b93223bbe3b4e9c91fca6b451d00667c5bf0 ]
During bit-banging the IR on a gpio pin, we cannot be scheduled or have anything interrupt us, else the generated signal will be incorrect. Therefore, we need to disable interrupts on the local cpu. This also disables preemption.
local_irq_disable() does exactly what we need and does not require a spinlock.
Signed-off-by: Sean Young sean@mess.org Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/rc/gpio-ir-tx.c | 16 +++++----------- 1 file changed, 5 insertions(+), 11 deletions(-)
diff --git a/drivers/media/rc/gpio-ir-tx.c b/drivers/media/rc/gpio-ir-tx.c index f33b443bfa47b..c6cd2e6d8e654 100644 --- a/drivers/media/rc/gpio-ir-tx.c +++ b/drivers/media/rc/gpio-ir-tx.c @@ -19,8 +19,6 @@ struct gpio_ir { struct gpio_desc *gpio; unsigned int carrier; unsigned int duty_cycle; - /* we need a spinlock to hold the cpu while transmitting */ - spinlock_t lock; };
static const struct of_device_id gpio_ir_tx_of_match[] = { @@ -53,12 +51,11 @@ static int gpio_ir_tx_set_carrier(struct rc_dev *dev, u32 carrier) static void gpio_ir_tx_unmodulated(struct gpio_ir *gpio_ir, uint *txbuf, uint count) { - unsigned long flags; ktime_t edge; s32 delta; int i;
- spin_lock_irqsave(&gpio_ir->lock, flags); + local_irq_disable();
edge = ktime_get();
@@ -72,14 +69,11 @@ static void gpio_ir_tx_unmodulated(struct gpio_ir *gpio_ir, uint *txbuf, }
gpiod_set_value(gpio_ir->gpio, 0); - - spin_unlock_irqrestore(&gpio_ir->lock, flags); }
static void gpio_ir_tx_modulated(struct gpio_ir *gpio_ir, uint *txbuf, uint count) { - unsigned long flags; ktime_t edge; /* * delta should never exceed 0.5 seconds (IR_MAX_DURATION) and on @@ -95,7 +89,7 @@ static void gpio_ir_tx_modulated(struct gpio_ir *gpio_ir, uint *txbuf, space = DIV_ROUND_CLOSEST((100 - gpio_ir->duty_cycle) * (NSEC_PER_SEC / 100), gpio_ir->carrier);
- spin_lock_irqsave(&gpio_ir->lock, flags); + local_irq_disable();
edge = ktime_get();
@@ -128,19 +122,20 @@ static void gpio_ir_tx_modulated(struct gpio_ir *gpio_ir, uint *txbuf, edge = last; } } - - spin_unlock_irqrestore(&gpio_ir->lock, flags); }
static int gpio_ir_tx(struct rc_dev *dev, unsigned int *txbuf, unsigned int count) { struct gpio_ir *gpio_ir = dev->priv; + unsigned long flags;
+ local_irq_save(flags); if (gpio_ir->carrier) gpio_ir_tx_modulated(gpio_ir, txbuf, count); else gpio_ir_tx_unmodulated(gpio_ir, txbuf, count); + local_irq_restore(flags);
return count; } @@ -176,7 +171,6 @@ static int gpio_ir_tx_probe(struct platform_device *pdev)
gpio_ir->carrier = 38000; gpio_ir->duty_cycle = 50; - spin_lock_init(&gpio_ir->lock);
rc = devm_rc_register_device(&pdev->dev, rcdev); if (rc < 0)
From: Ziye Yang ziye.yang@intel.com
[ Upstream commit a6ce7d7b4adaebc27ee7e78e5ecc378a1cfc221d ]
When handling commands without in-capsule data, we assign the ttag assuming we already have the queue commands array allocated (based on the queue size information in the connect data payload). However, if the connect itself did not send the connect data in-capsule, we have yet to allocate the queue commands, and we will assign a bogus ttag and suffer a NULL dereference when we receive the corresponding h2cdata pdu.
Fix this by checking whether we have already allocated commands before dereferencing the array when handling h2cdata; if we haven't, it is for sure a connect and we should use the preallocated connect command.
Signed-off-by: Ziye Yang ziye.yang@intel.com Signed-off-by: Sagi Grimberg sagi@grimberg.me Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/target/tcp.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c index de9217cfd22d7..3d29b773ced27 100644 --- a/drivers/nvme/target/tcp.c +++ b/drivers/nvme/target/tcp.c @@ -160,6 +160,11 @@ static void nvmet_tcp_finish_cmd(struct nvmet_tcp_cmd *cmd); static inline u16 nvmet_tcp_cmd_tag(struct nvmet_tcp_queue *queue, struct nvmet_tcp_cmd *cmd) { + if (unlikely(!queue->nr_cmds)) { + /* We didn't allocate cmds yet, send 0xffff */ + return USHRT_MAX; + } + return cmd - queue->cmds; }
@@ -872,7 +877,10 @@ static int nvmet_tcp_handle_h2c_data_pdu(struct nvmet_tcp_queue *queue) struct nvme_tcp_data_pdu *data = &queue->pdu.data; struct nvmet_tcp_cmd *cmd;
- cmd = &queue->cmds[data->ttag]; + if (likely(queue->nr_cmds)) + cmd = &queue->cmds[data->ttag]; + else + cmd = &queue->connect;
if (le32_to_cpu(data->data_offset) != cmd->rbytes_done) { pr_err("ttag %u unexpected data offset %u (expected %u)\n",
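The shape of the fix, modeled in plain C: while the per-queue command array is not allocated yet, tag generation falls back to a sentinel and tag lookup falls back to the preallocated connect command (queue and command structures reduced to the minimum):

#include <stdio.h>
#include <limits.h>

struct cmd { int id; };

struct queue {
	unsigned int nr_cmds;	/* 0 until the connect response sizes the queue */
	struct cmd *cmds;	/* allocated later */
	struct cmd connect;	/* preallocated command used for the connect */
};

static unsigned short cmd_tag(struct queue *q, struct cmd *c)
{
	if (!q->nr_cmds)
		return USHRT_MAX;	/* cmds[] not allocated yet */
	return (unsigned short)(c - q->cmds);
}

static struct cmd *cmd_from_ttag(struct queue *q, unsigned short ttag)
{
	if (q->nr_cmds)
		return &q->cmds[ttag];
	return &q->connect;		/* must be the connect command */
}

int main(void)
{
	struct queue q = { 0 };		/* connect not yet completed */
	unsigned short ttag = cmd_tag(&q, &q.connect);

	printf("ttag=%u, resolved back to connect: %d\n",
	       ttag, cmd_from_ttag(&q, ttag) == &q.connect);
	return 0;
}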
From: Sagi Grimberg sagi@grimberg.me
[ Upstream commit d7144f5c4cf4de95fdc3422943cf51c06aeaf7a7 ]
NVME_CTRL_NEW should never see any I/O, because in order to start initialization it has to transition to NVME_CTRL_CONNECTING and from there it will never return to this state.
Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Sagi Grimberg sagi@grimberg.me Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/fabrics.c | 1 - 1 file changed, 1 deletion(-)
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c index 4ec4829d62334..32f61fc5f4c57 100644 --- a/drivers/nvme/host/fabrics.c +++ b/drivers/nvme/host/fabrics.c @@ -576,7 +576,6 @@ bool __nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq, * which is require to set the queue live in the appropinquate states. */ switch (ctrl->state) { - case NVME_CTRL_NEW: case NVME_CTRL_CONNECTING: if (nvme_is_fabrics(req->cmd) && req->cmd->fabrics.fctype == nvme_fabrics_type_connect)
From: Sagi Grimberg sagi@grimberg.me
[ Upstream commit 7cf0d7c0f3c3b0203aaf81c1bc884924d8fdb9bd ]
Users can detect whether the wait has completed or not and take appropriate action based on this information (e.g. whether to continue initialization or rather fail and schedule another initialization attempt).
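A toy model of the new return-value contract (illustrative only; the real function walks the controller's namespaces under namespaces_rwsem, as the diff below shows): the remaining wait budget is threaded through each queue and handed back, so a zero return tells the caller the freeze never completed.

#include <stdio.h>

/* Toy model only: the remaining budget is carried from one queue to the
 * next and returned to the caller; 0 means the freeze timed out. */
static long wait_frozen_timeout(long budget, long time_needed)
{
	return budget > time_needed ? budget - time_needed : 0;
}

int main(void)
{
	long timeout = 100;			/* pretend jiffies budget */
	long per_queue_cost[] = { 30, 30, 60 };
	int i;

	for (i = 0; i < 3 && timeout > 0; i++)
		timeout = wait_frozen_timeout(timeout, per_queue_cost[i]);

	if (!timeout)
		printf("freeze timed out, fail and schedule another attempt\n");
	else
		printf("freeze completed, %ld jiffies of budget left\n", timeout);
	return 0;
}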
Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Sagi Grimberg sagi@grimberg.me Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/core.c | 3 ++- drivers/nvme/host/nvme.h | 2 +- 2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index f38548e6d55ec..3770020dd8492 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -4287,7 +4287,7 @@ void nvme_unfreeze(struct nvme_ctrl *ctrl) } EXPORT_SYMBOL_GPL(nvme_unfreeze);
-void nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout) +int nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout) { struct nvme_ns *ns;
@@ -4298,6 +4298,7 @@ void nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout) break; } up_read(&ctrl->namespaces_rwsem); + return timeout; } EXPORT_SYMBOL_GPL(nvme_wait_freeze_timeout);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h index e268f1d7e1a0f..c053aedbc55db 100644 --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h @@ -538,7 +538,7 @@ void nvme_kill_queues(struct nvme_ctrl *ctrl); void nvme_sync_queues(struct nvme_ctrl *ctrl); void nvme_unfreeze(struct nvme_ctrl *ctrl); void nvme_wait_freeze(struct nvme_ctrl *ctrl); -void nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout); +int nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout); void nvme_start_freeze(struct nvme_ctrl *ctrl);
#define NVME_QID_ANY -1
From: Sagi Grimberg sagi@grimberg.me
[ Upstream commit d4d61470ae48838f49e668503e840e1520b97162 ]
In the timeout handler we may need to complete a request because the request that timed out may be an I/O that is a part of a serial sequence of controller teardown or initialization. In order to complete the request, we need to fence any other context that may compete with us and complete the request that is timing out.
In this case, we could have a potential double completion in case a hard-irq or a different competing context triggered error recovery and is running inflight request cancellation concurrently with the timeout handler.
Protect using a ctrl teardown_lock to serialize contexts that may complete a cancelled request due to error recovery or a reset.
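The serialization the patch adds can be pictured with a small user-space model (a sketch with made-up names, using a pthread mutex in place of the ctrl teardown_lock): whichever context takes the lock first completes the request, and the other sees it already completed and backs off.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy model of the fencing pattern: the timeout handler and the error-
 * recovery path both want to complete the same request; a shared lock plus
 * a "completed" check ensures only one of them actually does it. */
static pthread_mutex_t teardown_lock = PTHREAD_MUTEX_INITIALIZER;
static bool completed;

static void complete_request(const char *who)
{
	pthread_mutex_lock(&teardown_lock);
	if (!completed) {
		completed = true;
		printf("%s completed the request\n", who);
	} else {
		printf("%s saw it already completed, skipping\n", who);
	}
	pthread_mutex_unlock(&teardown_lock);
}

static void *error_recovery(void *arg)
{
	(void)arg;
	complete_request("error recovery");
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, error_recovery, NULL);
	complete_request("timeout handler");	/* only one path wins */
	pthread_join(t, NULL);
	return 0;
}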
Signed-off-by: Sagi Grimberg sagi@grimberg.me Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/tcp.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c index a6d2e3330a584..d1e5b27675b1b 100644 --- a/drivers/nvme/host/tcp.c +++ b/drivers/nvme/host/tcp.c @@ -122,6 +122,7 @@ struct nvme_tcp_ctrl { struct sockaddr_storage src_addr; struct nvme_ctrl ctrl;
+ struct mutex teardown_lock; struct work_struct err_work; struct delayed_work connect_work; struct nvme_tcp_request async_req; @@ -1497,7 +1498,6 @@ static void nvme_tcp_stop_queue(struct nvme_ctrl *nctrl, int qid)
if (!test_and_clear_bit(NVME_TCP_Q_LIVE, &queue->flags)) return; - __nvme_tcp_stop_queue(queue); }
@@ -1845,6 +1845,7 @@ static int nvme_tcp_configure_admin_queue(struct nvme_ctrl *ctrl, bool new) static void nvme_tcp_teardown_admin_queue(struct nvme_ctrl *ctrl, bool remove) { + mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock); blk_mq_quiesce_queue(ctrl->admin_q); nvme_tcp_stop_queue(ctrl, 0); if (ctrl->admin_tagset) { @@ -1855,13 +1856,16 @@ static void nvme_tcp_teardown_admin_queue(struct nvme_ctrl *ctrl, if (remove) blk_mq_unquiesce_queue(ctrl->admin_q); nvme_tcp_destroy_admin_queue(ctrl, remove); + mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock); }
static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl, bool remove) { + mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock); if (ctrl->queue_count <= 1) - return; + goto out; + blk_mq_quiesce_queue(ctrl->admin_q); nvme_start_freeze(ctrl); nvme_stop_queues(ctrl); nvme_tcp_stop_io_queues(ctrl); @@ -1873,6 +1877,8 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl, if (remove) nvme_start_queues(ctrl); nvme_tcp_destroy_io_queues(ctrl, remove); +out: + mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock); }
static void nvme_tcp_reconnect_or_remove(struct nvme_ctrl *ctrl) @@ -2384,6 +2390,7 @@ static struct nvme_ctrl *nvme_tcp_create_ctrl(struct device *dev, nvme_tcp_reconnect_ctrl_work); INIT_WORK(&ctrl->err_work, nvme_tcp_error_recovery_work); INIT_WORK(&ctrl->ctrl.reset_work, nvme_reset_ctrl_work); + mutex_init(&ctrl->teardown_lock);
if (!(opts->mask & NVMF_OPT_TRSVCID)) { opts->trsvcid =
From: Sagi Grimberg sagi@grimberg.me
[ Upstream commit 236187c4ed195161dfa4237c7beffbba0c5ae45b ]
When a request times out in a LIVE state, we simply trigger error recovery and let the error recovery handle the request cancellation, however when a request times out in a non LIVE state, we make sure to complete it immediately as it might block controller setup or teardown and prevent forward progress.
However, tearing down the entire set of I/O and admin queues causes a freeze/unfreeze imbalance (q->mq_freeze_depth) and is really overkill for what we actually need, which is to just fence any controller teardown that may be running, stop the queue, and cancel the request if it is not already completed.
Now that we have the controller teardown_lock, we can safely serialize request cancellation. This addresses a hang caused by an extra queue freeze being applied to the controller namespaces, which prevented the unfreeze from completing correctly.
Signed-off-by: Sagi Grimberg sagi@grimberg.me Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/tcp.c | 56 ++++++++++++++++++++++++++--------------- 1 file changed, 36 insertions(+), 20 deletions(-)
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c index d1e5b27675b1b..65a3bad778104 100644 --- a/drivers/nvme/host/tcp.c +++ b/drivers/nvme/host/tcp.c @@ -448,6 +448,7 @@ static void nvme_tcp_error_recovery(struct nvme_ctrl *ctrl) if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING)) return;
+ dev_warn(ctrl->device, "starting error recovery\n"); queue_work(nvme_reset_wq, &to_tcp_ctrl(ctrl)->err_work); }
@@ -2125,40 +2126,55 @@ static void nvme_tcp_submit_async_event(struct nvme_ctrl *arg) nvme_tcp_queue_request(&ctrl->async_req, true); }
+static void nvme_tcp_complete_timed_out(struct request *rq) +{ + struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq); + struct nvme_ctrl *ctrl = &req->queue->ctrl->ctrl; + + /* fence other contexts that may complete the command */ + mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock); + nvme_tcp_stop_queue(ctrl, nvme_tcp_queue_id(req->queue)); + if (!blk_mq_request_completed(rq)) { + nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD; + blk_mq_complete_request(rq); + } + mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock); +} + static enum blk_eh_timer_return nvme_tcp_timeout(struct request *rq, bool reserved) { struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq); - struct nvme_tcp_ctrl *ctrl = req->queue->ctrl; + struct nvme_ctrl *ctrl = &req->queue->ctrl->ctrl; struct nvme_tcp_cmd_pdu *pdu = req->pdu;
- /* - * Restart the timer if a controller reset is already scheduled. Any - * timed out commands would be handled before entering the connecting - * state. - */ - if (ctrl->ctrl.state == NVME_CTRL_RESETTING) - return BLK_EH_RESET_TIMER; - - dev_warn(ctrl->ctrl.device, + dev_warn(ctrl->device, "queue %d: timeout request %#x type %d\n", nvme_tcp_queue_id(req->queue), rq->tag, pdu->hdr.type);
- if (ctrl->ctrl.state != NVME_CTRL_LIVE) { + if (ctrl->state != NVME_CTRL_LIVE) { /* - * Teardown immediately if controller times out while starting - * or we are already started error recovery. all outstanding - * requests are completed on shutdown, so we return BLK_EH_DONE. + * If we are resetting, connecting or deleting we should + * complete immediately because we may block controller + * teardown or setup sequence + * - ctrl disable/shutdown fabrics requests + * - connect requests + * - initialization admin requests + * - I/O requests that entered after unquiescing and + * the controller stopped responding + * + * All other requests should be cancelled by the error + * recovery work, so it's fine that we fail it here. */ - flush_work(&ctrl->err_work); - nvme_tcp_teardown_io_queues(&ctrl->ctrl, false); - nvme_tcp_teardown_admin_queue(&ctrl->ctrl, false); + nvme_tcp_complete_timed_out(rq); return BLK_EH_DONE; }
- dev_warn(ctrl->ctrl.device, "starting error recovery\n"); - nvme_tcp_error_recovery(&ctrl->ctrl); - + /* + * LIVE state should trigger the normal error recovery which will + * handle completing this request. + */ + nvme_tcp_error_recovery(ctrl); return BLK_EH_RESET_TIMER; }
From: Sagi Grimberg sagi@grimberg.me
[ Upstream commit e5c01f4f7f623e768e868bcc08d8e7ceb03b75d0 ]
If the controller becomes unresponsive in the middle of a reset, we will hang because we are waiting for the freeze to complete, but that cannot happen since there are inflight commands holding the q_usage_counter, and we can't blindly fail requests that time out.
So give the wait a timeout, and if the queue freeze cannot complete in time, fail and have the error handling take care of how to proceed (either schedule a reconnect or remove the controller).
Signed-off-by: Sagi Grimberg sagi@grimberg.me Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/tcp.c | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c index 65a3bad778104..f1f66bf96cbb9 100644 --- a/drivers/nvme/host/tcp.c +++ b/drivers/nvme/host/tcp.c @@ -1753,7 +1753,15 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
if (!new) { nvme_start_queues(ctrl); - nvme_wait_freeze(ctrl); + if (!nvme_wait_freeze_timeout(ctrl, NVME_IO_TIMEOUT)) { + /* + * If we timed out waiting for freeze we are likely to + * be stuck. Fail the controller initialization just + * to be safe. + */ + ret = -ENODEV; + goto out_wait_freeze_timed_out; + } blk_mq_update_nr_hw_queues(ctrl->tagset, ctrl->queue_count - 1); nvme_unfreeze(ctrl); @@ -1761,6 +1769,9 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
return 0;
+out_wait_freeze_timed_out: + nvme_stop_queues(ctrl); + nvme_tcp_stop_io_queues(ctrl); out_cleanup_connect_q: if (new) blk_cleanup_queue(ctrl->connect_q);
From: Sagi Grimberg sagi@grimberg.me
[ Upstream commit 5110f40241d08334375eb0495f174b1d2c07657e ]
In the timeout handler we may need to complete a request because the request that timed out may be an I/O that is a part of a serial sequence of controller teardown or initialization. In order to complete the request, we need to fence any other context that may compete with us and complete the request that is timing out.
In this case, we could have a potential double completion in case a hard-irq or a different competing context triggered error recovery and is running inflight request cancellation concurrently with the timeout handler.
Protect using a ctrl teardown_lock to serialize contexts that may complete a cancelled request due to error recovery or a reset.
Reviewed-by: Christoph Hellwig hch@lst.de Reviewed-by: James Smart james.smart@broadcom.com Signed-off-by: Sagi Grimberg sagi@grimberg.me Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/rdma.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c index 876859cd14e86..ffe83d1f576bf 100644 --- a/drivers/nvme/host/rdma.c +++ b/drivers/nvme/host/rdma.c @@ -121,6 +121,7 @@ struct nvme_rdma_ctrl { struct sockaddr_storage src_addr;
struct nvme_ctrl ctrl; + struct mutex teardown_lock; bool use_inline_data; u32 io_queues[HCTX_MAX_TYPES]; }; @@ -971,6 +972,7 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new) static void nvme_rdma_teardown_admin_queue(struct nvme_rdma_ctrl *ctrl, bool remove) { + mutex_lock(&ctrl->teardown_lock); blk_mq_quiesce_queue(ctrl->ctrl.admin_q); nvme_rdma_stop_queue(&ctrl->queues[0]); if (ctrl->ctrl.admin_tagset) { @@ -981,11 +983,13 @@ static void nvme_rdma_teardown_admin_queue(struct nvme_rdma_ctrl *ctrl, if (remove) blk_mq_unquiesce_queue(ctrl->ctrl.admin_q); nvme_rdma_destroy_admin_queue(ctrl, remove); + mutex_unlock(&ctrl->teardown_lock); }
static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl, bool remove) { + mutex_lock(&ctrl->teardown_lock); if (ctrl->ctrl.queue_count > 1) { nvme_start_freeze(&ctrl->ctrl); nvme_stop_queues(&ctrl->ctrl); @@ -999,6 +1003,7 @@ static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl, nvme_start_queues(&ctrl->ctrl); nvme_rdma_destroy_io_queues(ctrl, remove); } + mutex_unlock(&ctrl->teardown_lock); }
static void nvme_rdma_free_ctrl(struct nvme_ctrl *nctrl) @@ -2252,6 +2257,7 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev, return ERR_PTR(-ENOMEM); ctrl->ctrl.opts = opts; INIT_LIST_HEAD(&ctrl->list); + mutex_init(&ctrl->teardown_lock);
if (!(opts->mask & NVMF_OPT_TRSVCID)) { opts->trsvcid =
From: Sagi Grimberg sagi@grimberg.me
[ Upstream commit 0475a8dcbcee92a5d22e40c9c6353829fc6294b8 ]
When a request times out in a LIVE state, we simply trigger error recovery and let the error recovery handle the request cancellation, however when a request times out in a non LIVE state, we make sure to complete it immediately as it might block controller setup or teardown and prevent forward progress.
However, tearing down the entire set of I/O and admin queues causes a freeze/unfreeze imbalance (q->mq_freeze_depth) and is really overkill for what we actually need, which is to just fence any controller teardown that may be running, stop the queue, and cancel the request if it is not already completed.
Now that we have the controller teardown_lock, we can safely serialize request cancellation. This addresses a hang caused by an extra queue freeze being applied to the controller namespaces, which prevented the unfreeze from completing correctly.
Reviewed-by: Christoph Hellwig hch@lst.de Reviewed-by: James Smart james.smart@broadcom.com Signed-off-by: Sagi Grimberg sagi@grimberg.me Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/rdma.c | 49 +++++++++++++++++++++++++++------------- 1 file changed, 33 insertions(+), 16 deletions(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c index ffe83d1f576bf..99bf88eb812c5 100644 --- a/drivers/nvme/host/rdma.c +++ b/drivers/nvme/host/rdma.c @@ -1159,6 +1159,7 @@ static void nvme_rdma_error_recovery(struct nvme_rdma_ctrl *ctrl) if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING)) return;
+ dev_warn(ctrl->ctrl.device, "starting error recovery\n"); queue_work(nvme_reset_wq, &ctrl->err_work); }
@@ -1925,6 +1926,22 @@ static int nvme_rdma_cm_handler(struct rdma_cm_id *cm_id, return 0; }
+static void nvme_rdma_complete_timed_out(struct request *rq) +{ + struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq); + struct nvme_rdma_queue *queue = req->queue; + struct nvme_rdma_ctrl *ctrl = queue->ctrl; + + /* fence other contexts that may complete the command */ + mutex_lock(&ctrl->teardown_lock); + nvme_rdma_stop_queue(queue); + if (!blk_mq_request_completed(rq)) { + nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD; + blk_mq_complete_request(rq); + } + mutex_unlock(&ctrl->teardown_lock); +} + static enum blk_eh_timer_return nvme_rdma_timeout(struct request *rq, bool reserved) { @@ -1935,29 +1952,29 @@ nvme_rdma_timeout(struct request *rq, bool reserved) dev_warn(ctrl->ctrl.device, "I/O %d QID %d timeout\n", rq->tag, nvme_rdma_queue_idx(queue));
- /* - * Restart the timer if a controller reset is already scheduled. Any - * timed out commands would be handled before entering the connecting - * state. - */ - if (ctrl->ctrl.state == NVME_CTRL_RESETTING) - return BLK_EH_RESET_TIMER; - if (ctrl->ctrl.state != NVME_CTRL_LIVE) { /* - * Teardown immediately if controller times out while starting - * or we are already started error recovery. all outstanding - * requests are completed on shutdown, so we return BLK_EH_DONE. + * If we are resetting, connecting or deleting we should + * complete immediately because we may block controller + * teardown or setup sequence + * - ctrl disable/shutdown fabrics requests + * - connect requests + * - initialization admin requests + * - I/O requests that entered after unquiescing and + * the controller stopped responding + * + * All other requests should be cancelled by the error + * recovery work, so it's fine that we fail it here. */ - flush_work(&ctrl->err_work); - nvme_rdma_teardown_io_queues(ctrl, false); - nvme_rdma_teardown_admin_queue(ctrl, false); + nvme_rdma_complete_timed_out(rq); return BLK_EH_DONE; }
- dev_warn(ctrl->ctrl.device, "starting error recovery\n"); + /* + * LIVE state should trigger the normal error recovery which will + * handle completing this request. + */ nvme_rdma_error_recovery(ctrl); - return BLK_EH_RESET_TIMER; }
From: Sagi Grimberg sagi@grimberg.me
[ Upstream commit 2362acb6785611eda795bfc12e1ea6b202ecf62c ]
If the controller becomes unresponsive in the middle of a reset, we will hang because we are waiting for the freeze to complete, but that cannot happen since there are inflight commands holding the q_usage_counter, and we can't blindly fail requests that time out.
So give the wait a timeout, and if the queue freeze cannot complete in time, fail and have the error handling take care of how to proceed (either schedule a reconnect or remove the controller).
Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Sagi Grimberg sagi@grimberg.me Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/rdma.c | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c index 99bf88eb812c5..6c07bb55b0f83 100644 --- a/drivers/nvme/host/rdma.c +++ b/drivers/nvme/host/rdma.c @@ -950,7 +950,15 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
if (!new) { nvme_start_queues(&ctrl->ctrl); - nvme_wait_freeze(&ctrl->ctrl); + if (!nvme_wait_freeze_timeout(&ctrl->ctrl, NVME_IO_TIMEOUT)) { + /* + * If we timed out waiting for freeze we are likely to + * be stuck. Fail the controller initialization just + * to be safe. + */ + ret = -ENODEV; + goto out_wait_freeze_timed_out; + } blk_mq_update_nr_hw_queues(ctrl->ctrl.tagset, ctrl->ctrl.queue_count - 1); nvme_unfreeze(&ctrl->ctrl); @@ -958,6 +966,9 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
return 0;
+out_wait_freeze_timed_out: + nvme_stop_queues(&ctrl->ctrl); + nvme_rdma_stop_io_queues(ctrl); out_cleanup_connect_q: if (new) blk_cleanup_queue(ctrl->ctrl.connect_q);
From: Tong Zhang ztong0001@gmail.com
[ Upstream commit 7ad92f656bddff4cf8f641e0e3b1acd4eb9644cb ]
This patch addresses a "Trying to free already-free IRQ" warning and a NULL pointer dereference that occur when an NVMe device gets a timeout error during initialization. The problem happens when nvme_timeout() is called while nvme_reset_work() is still executing. The patch fixes it by setting the flag of the problematic request to NVME_REQ_CANCELLED before calling nvme_dev_disable(), making sure __nvme_submit_sync_cmd() returns an error code and letting nvme_submit_sync_cmd() fail gracefully. The following is the console output.
[ 62.472097] nvme nvme0: I/O 13 QID 0 timeout, disable controller [ 62.488796] nvme nvme0: could not set timestamp (881) [ 62.494888] ------------[ cut here ]------------ [ 62.495142] Trying to free already-free IRQ 11 [ 62.495366] WARNING: CPU: 0 PID: 7 at kernel/irq/manage.c:1751 free_irq+0x1f7/0x370 [ 62.495742] Modules linked in: [ 62.495902] CPU: 0 PID: 7 Comm: kworker/u4:0 Not tainted 5.8.0+ #8 [ 62.496206] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-48-gd9c812dda519-p4 [ 62.496772] Workqueue: nvme-reset-wq nvme_reset_work [ 62.497019] RIP: 0010:free_irq+0x1f7/0x370 [ 62.497223] Code: e8 ce 49 11 00 48 83 c4 08 4c 89 e0 5b 5d 41 5c 41 5d 41 5e 41 5f c3 44 89 f6 48 c70 [ 62.498133] RSP: 0000:ffffa96800043d40 EFLAGS: 00010086 [ 62.498391] RAX: 0000000000000000 RBX: ffff9b87fc458400 RCX: 0000000000000000 [ 62.498741] RDX: 0000000000000001 RSI: 0000000000000096 RDI: ffffffff9693d72c [ 62.499091] RBP: ffff9b87fd4c8f60 R08: ffffa96800043bfd R09: 0000000000000163 [ 62.499440] R10: ffffa96800043bf8 R11: ffffa96800043bfd R12: ffff9b87fd4c8e00 [ 62.499790] R13: ffff9b87fd4c8ea4 R14: 000000000000000b R15: ffff9b87fd76b000 [ 62.500140] FS: 0000000000000000(0000) GS:ffff9b87fdc00000(0000) knlGS:0000000000000000 [ 62.500534] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 62.500816] CR2: 0000000000000000 CR3: 000000003aa0a000 CR4: 00000000000006f0 [ 62.501165] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 62.501515] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 62.501864] Call Trace: [ 62.501993] pci_free_irq+0x13/0x20 [ 62.502167] nvme_reset_work+0x5d0/0x12a0 [ 62.502369] ? update_load_avg+0x59/0x580 [ 62.502569] ? ttwu_queue_wakelist+0xa8/0xc0 [ 62.502780] ? try_to_wake_up+0x1a2/0x450 [ 62.502979] process_one_work+0x1d2/0x390 [ 62.503179] worker_thread+0x45/0x3b0 [ 62.503361] ? process_one_work+0x390/0x390 [ 62.503568] kthread+0xf9/0x130 [ 62.503726] ? 
kthread_park+0x80/0x80 [ 62.503911] ret_from_fork+0x22/0x30 [ 62.504090] ---[ end trace de9ed4a70f8d71e2 ]--- [ 123.912275] nvme nvme0: I/O 12 QID 0 timeout, disable controller [ 123.914670] nvme nvme0: 1/0/0 default/read/poll queues [ 123.916310] BUG: kernel NULL pointer dereference, address: 0000000000000000 [ 123.917469] #PF: supervisor write access in kernel mode [ 123.917725] #PF: error_code(0x0002) - not-present page [ 123.917976] PGD 0 P4D 0 [ 123.918109] Oops: 0002 [#1] SMP PTI [ 123.918283] CPU: 0 PID: 7 Comm: kworker/u4:0 Tainted: G W 5.8.0+ #8 [ 123.918650] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-48-gd9c812dda519-p4 [ 123.919219] Workqueue: nvme-reset-wq nvme_reset_work [ 123.919469] RIP: 0010:__blk_mq_alloc_map_and_request+0x21/0x80 [ 123.919757] Code: 66 0f 1f 84 00 00 00 00 00 41 55 41 54 55 48 63 ee 53 48 8b 47 68 89 ee 48 89 fb 8b4 [ 123.920657] RSP: 0000:ffffa96800043d40 EFLAGS: 00010286 [ 123.920912] RAX: ffff9b87fc4fee40 RBX: ffff9b87fc8cb008 RCX: 0000000000000000 [ 123.921258] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff9b87fc618000 [ 123.921602] RBP: 0000000000000000 R08: ffff9b87fdc2c4a0 R09: ffff9b87fc616000 [ 123.921949] R10: 0000000000000000 R11: ffff9b87fffd1500 R12: 0000000000000000 [ 123.922295] R13: 0000000000000000 R14: ffff9b87fc8cb200 R15: ffff9b87fc8cb000 [ 123.922641] FS: 0000000000000000(0000) GS:ffff9b87fdc00000(0000) knlGS:0000000000000000 [ 123.923032] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 123.923312] CR2: 0000000000000000 CR3: 000000003aa0a000 CR4: 00000000000006f0 [ 123.923660] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 123.924007] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 123.924353] Call Trace: [ 123.924479] blk_mq_alloc_tag_set+0x137/0x2a0 [ 123.924694] nvme_reset_work+0xed6/0x12a0 [ 123.924898] process_one_work+0x1d2/0x390 [ 123.925099] worker_thread+0x45/0x3b0 [ 123.925280] ? process_one_work+0x390/0x390 [ 123.925486] kthread+0xf9/0x130 [ 123.925642] ? kthread_park+0x80/0x80 [ 123.925825] ret_from_fork+0x22/0x30 [ 123.926004] Modules linked in: [ 123.926158] CR2: 0000000000000000 [ 123.926322] ---[ end trace de9ed4a70f8d71e3 ]--- [ 123.926549] RIP: 0010:__blk_mq_alloc_map_and_request+0x21/0x80 [ 123.926832] Code: 66 0f 1f 84 00 00 00 00 00 41 55 41 54 55 48 63 ee 53 48 8b 47 68 89 ee 48 89 fb 8b4 [ 123.927734] RSP: 0000:ffffa96800043d40 EFLAGS: 00010286 [ 123.927989] RAX: ffff9b87fc4fee40 RBX: ffff9b87fc8cb008 RCX: 0000000000000000 [ 123.928336] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff9b87fc618000 [ 123.928679] RBP: 0000000000000000 R08: ffff9b87fdc2c4a0 R09: ffff9b87fc616000 [ 123.929025] R10: 0000000000000000 R11: ffff9b87fffd1500 R12: 0000000000000000 [ 123.929370] R13: 0000000000000000 R14: ffff9b87fc8cb200 R15: ffff9b87fc8cb000 [ 123.929715] FS: 0000000000000000(0000) GS:ffff9b87fdc00000(0000) knlGS:0000000000000000 [ 123.930106] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 123.930384] CR2: 0000000000000000 CR3: 000000003aa0a000 CR4: 00000000000006f0 [ 123.930731] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 123.931077] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Co-developed-by: Keith Busch kbusch@kernel.org Signed-off-by: Tong Zhang ztong0001@gmail.com Reviewed-by: Keith Busch kbusch@kernel.org Signed-off-by: Sagi Grimberg sagi@grimberg.me Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/pci.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index d4b1ff7471231..69a19fe241063 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -1250,8 +1250,8 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved) dev_warn_ratelimited(dev->ctrl.device, "I/O %d QID %d timeout, disable controller\n", req->tag, nvmeq->qid); - nvme_dev_disable(dev, true); nvme_req(req)->flags |= NVME_REQ_CANCELLED; + nvme_dev_disable(dev, true); return BLK_EH_DONE; case NVME_CTRL_RESETTING: return BLK_EH_RESET_TIMER; @@ -1268,10 +1268,10 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved) dev_warn(dev->ctrl.device, "I/O %d QID %d timeout, reset controller\n", req->tag, nvmeq->qid); + nvme_req(req)->flags |= NVME_REQ_CANCELLED; nvme_dev_disable(dev, false); nvme_reset_ctrl(&dev->ctrl);
- nvme_req(req)->flags |= NVME_REQ_CANCELLED; return BLK_EH_DONE; }
From: Nirenjan Krishnan nirenjan@gmail.com
[ Upstream commit 77df710ba633dfb6c65c65cf99ea9e084a1c9933 ]
The Saitek X52 family of joysticks has a pair of axes that were originally (by the Windows driver) used as mouse pointer controls. The corresponding usage page is the Game Controls page, which is not recognized by the generic HID driver, and therefore both axes get mapped to ABS_MISC. The quirk makes the second axis get mapped to ABS_MISC+1 instead, so it is made available separately.
One Saitek X52 device is already fixed. This patch fixes the other two known devices with VID/PID 06a3:0255 and 06a3:0762.
Signed-off-by: Nirenjan Krishnan nirenjan@gmail.com Signed-off-by: Jiri Kosina jkosina@suse.cz Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hid/hid-ids.h | 2 ++ drivers/hid/hid-quirks.c | 2 ++ 2 files changed, 4 insertions(+)
diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h index 7cfa9785bfbb0..476cb99bd61b7 100644 --- a/drivers/hid/hid-ids.h +++ b/drivers/hid/hid-ids.h @@ -1011,6 +1011,8 @@ #define USB_DEVICE_ID_SAITEK_RAT9 0x0cfa #define USB_DEVICE_ID_SAITEK_MMO7 0x0cd0 #define USB_DEVICE_ID_SAITEK_X52 0x075c +#define USB_DEVICE_ID_SAITEK_X52_2 0x0255 +#define USB_DEVICE_ID_SAITEK_X52_PRO 0x0762
#define USB_VENDOR_ID_SAMSUNG 0x0419 #define USB_DEVICE_ID_SAMSUNG_IR_REMOTE 0x0001 diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c index c242150d35a3a..5247147f5cc7c 100644 --- a/drivers/hid/hid-quirks.c +++ b/drivers/hid/hid-quirks.c @@ -147,6 +147,8 @@ static const struct hid_device_id hid_quirks[] = { { HID_USB_DEVICE(USB_VENDOR_ID_RETROUSB, USB_DEVICE_ID_RETROUSB_SNES_RETROPORT), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE }, { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_RUMBLEPAD), HID_QUIRK_BADPAD }, { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_X52), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE }, + { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_X52_2), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE }, + { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_X52_PRO), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE }, { HID_USB_DEVICE(USB_VENDOR_ID_SEMICO, USB_DEVICE_ID_SEMICO_USB_KEYKOARD2), HID_QUIRK_NO_INIT_REPORTS }, { HID_USB_DEVICE(USB_VENDOR_ID_SEMICO, USB_DEVICE_ID_SEMICO_USB_KEYKOARD), HID_QUIRK_NO_INIT_REPORTS }, { HID_USB_DEVICE(USB_VENDOR_ID_SENNHEISER, USB_DEVICE_ID_SENNHEISER_BTD500USB), HID_QUIRK_NOGET },
From: Nicholas Miell nmiell@gmail.com
[ Upstream commit 724a419ea28f7514a391e80040230f69cf626707 ]
When operating in XInput mode, the 8bitdo SN30 Pro+ requires the same quirk as the official Xbox One Bluetooth controllers for rumble to function.
Other controllers like the N30 Pro 2, SF30 Pro, SN30 Pro, etc. probably also need this quirk, but I do not have the hardware to test.
Signed-off-by: Nicholas Miell nmiell@gmail.com Signed-off-by: Jiri Kosina jkosina@suse.cz Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hid/hid-ids.h | 1 + drivers/hid/hid-microsoft.c | 2 ++ 2 files changed, 3 insertions(+)
diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h index 476cb99bd61b7..95f9d285b3c6b 100644 --- a/drivers/hid/hid-ids.h +++ b/drivers/hid/hid-ids.h @@ -846,6 +846,7 @@ #define USB_DEVICE_ID_MS_POWER_COVER 0x07da #define USB_DEVICE_ID_MS_XBOX_ONE_S_CONTROLLER 0x02fd #define USB_DEVICE_ID_MS_PIXART_MOUSE 0x00cb +#define USB_DEVICE_ID_8BITDO_SN30_PRO_PLUS 0x02e0
#define USB_VENDOR_ID_MOJO 0x8282 #define USB_DEVICE_ID_RETRO_ADAPTER 0x3201 diff --git a/drivers/hid/hid-microsoft.c b/drivers/hid/hid-microsoft.c index 2d8b589201a4e..8cb1ca1936e42 100644 --- a/drivers/hid/hid-microsoft.c +++ b/drivers/hid/hid-microsoft.c @@ -451,6 +451,8 @@ static const struct hid_device_id ms_devices[] = { .driver_data = MS_SURFACE_DIAL }, { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_XBOX_ONE_S_CONTROLLER), .driver_data = MS_QUIRK_FF }, + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_8BITDO_SN30_PRO_PLUS), + .driver_data = MS_QUIRK_FF }, { } }; MODULE_DEVICE_TABLE(hid, ms_devices);
From: Xie He xie.he.0141@gmail.com
[ Upstream commit 1a545ebe380bf4c1433e3c136e35a77764fda5ad ]
This driver didn't set hard_header_len. This patch sets hard_header_len for it according to its header_ops->create function.
This driver's header_ops->create function (cisco_hard_header) creates a header of (struct hdlc_header), so hard_header_len should be set to sizeof(struct hdlc_header).
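For reference, Cisco HDLC framing consists of a 1-byte address, a 1-byte control field and a 16-bit protocol field, so the header is 4 bytes. The sketch below uses an illustrative packed layout mirroring that framing (an assumption, not a copy of the driver's definition) to show what sizeof(struct hdlc_header) is expected to evaluate to.

#include <stdint.h>
#include <stdio.h>

/* Illustrative layout only, mirroring Cisco HDLC framing:
 * address (1 byte), control (1 byte), protocol (2 bytes). */
struct example_hdlc_header {
	uint8_t address;
	uint8_t control;
	uint16_t protocol;	/* big-endian on the wire */
} __attribute__((packed));

int main(void)
{
	/* Expected to match what the patch assigns to hard_header_len,
	 * i.e. sizeof(struct hdlc_header) in the driver. */
	printf("sizeof(example_hdlc_header) = %zu\n",
	       sizeof(struct example_hdlc_header));
	return 0;
}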
Cc: Martin Schiller ms@dev.tdt.de Signed-off-by: Xie He xie.he.0141@gmail.com Acked-by: Krzysztof Halasa khc@pm.waw.pl Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wan/hdlc_cisco.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/net/wan/hdlc_cisco.c b/drivers/net/wan/hdlc_cisco.c index d8cba3625c185..444130655d8ea 100644 --- a/drivers/net/wan/hdlc_cisco.c +++ b/drivers/net/wan/hdlc_cisco.c @@ -370,6 +370,7 @@ static int cisco_ioctl(struct net_device *dev, struct ifreq *ifr) memcpy(&state(hdlc)->settings, &new_settings, size); spin_lock_init(&state(hdlc)->lock); dev->header_ops = &cisco_header_ops; + dev->hard_header_len = sizeof(struct hdlc_header); dev->type = ARPHRD_CISCO; call_netdevice_notifiers(NETDEV_POST_TYPE_CHANGE, dev); netif_dormant_on(dev);
From: Dinghao Liu dinghao.liu@zju.edu.cn
[ Upstream commit b7429ea53d6c0936a0f10a5d64164f0aea440143 ]
When input_mt_init_slots() fails, the input device should be freed to prevent a memory leak. When input_register_device() fails, we should call input_mt_destroy_slots() to free the memory allocated by input_mt_init_slots().
Signed-off-by: Dinghao Liu dinghao.liu@zju.edu.cn Signed-off-by: Jiri Kosina jkosina@suse.cz Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hid/hid-elan.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/hid/hid-elan.c b/drivers/hid/hid-elan.c index 45c4f888b7c4e..dae193749d443 100644 --- a/drivers/hid/hid-elan.c +++ b/drivers/hid/hid-elan.c @@ -188,6 +188,7 @@ static int elan_input_configured(struct hid_device *hdev, struct hid_input *hi) ret = input_mt_init_slots(input, ELAN_MAX_FINGERS, INPUT_MT_POINTER); if (ret) { hid_err(hdev, "Failed to init elan MT slots: %d\n", ret); + input_free_device(input); return ret; }
@@ -198,6 +199,7 @@ static int elan_input_configured(struct hid_device *hdev, struct hid_input *hi) if (ret) { hid_err(hdev, "Failed to register elan input device: %d\n", ret); + input_mt_destroy_slots(input); input_free_device(input); return ret; }
From: Evgeniy Didin Evgeniy.Didin@synopsys.com
[ Upstream commit 26907eb605fbc3ba9dbf888f21d9d8d04471271d ]
The HSDK board has a Micrel KSZ9031 PHY; recent commit bcf3440c6dd ("net: phy: micrel: add phy-mode support for the KSZ9031 PHY") caused a breakdown of Ethernet. Using 'phy-mode = "rgmii"' is not correct because, according to the RGMII specification, it is necessary to have a delay on RX (PHY to MAC) which is not generated in the case of "rgmii". Using "rgmii-id" adds the necessary delay and solves the issue.
Also add the name of the PHY placed on the HSDK board.
Signed-off-by: Evgeniy Didin Evgeniy.Didin@synopsys.com Cc: Eugeniy Paltsev Eugeniy.Paltsev@synopsys.com Cc: Alexey Brodkin abrodkin@synopsys.com Signed-off-by: Vineet Gupta vgupta@synopsys.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arc/boot/dts/hsdk.dts | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arc/boot/dts/hsdk.dts b/arch/arc/boot/dts/hsdk.dts index 5d64a5a940ee6..dcaa44e408ace 100644 --- a/arch/arc/boot/dts/hsdk.dts +++ b/arch/arc/boot/dts/hsdk.dts @@ -210,7 +210,7 @@ gmac: ethernet@8000 { reg = <0x8000 0x2000>; interrupts = <10>; interrupt-names = "macirq"; - phy-mode = "rgmii"; + phy-mode = "rgmii-id"; snps,pbl = <32>; snps,multicast-filter-bins = <256>; clocks = <&gmacclk>; @@ -228,7 +228,7 @@ mdio { #address-cells = <1>; #size-cells = <0>; compatible = "snps,dwmac-mdio"; - phy0: ethernet-phy@0 { + phy0: ethernet-phy@0 { /* Micrel KSZ9031 */ reg = <0>; }; };
From: "Rafael J. Wysocki" rafael.j.wysocki@intel.com
[ Upstream commit 43298db3009f06fe5c69e1ca8b6cfc2565772fa1 ]
After commit f6ebbcf08f37 ("cpufreq: intel_pstate: Implement passive mode with HWP enabled") it is possible to change the driver status to "off" via sysfs with HWP enabled, which effectively causes the driver to unregister itself, but HWP remains active and it forces the minimum performance, so even if another cpufreq driver is loaded, it will not be able to control the CPU frequency.
For this reason, make the driver refuse to change the status to "off" with HWP enabled.
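A small user-space sketch of the resulting behavior (hedged: the sysfs path is the standard intel_pstate status attribute, and the EBUSY outcome assumes HWP is active and this patch is applied):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/sys/devices/system/cpu/intel_pstate/status";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open (needs root and the intel_pstate driver)");
		return 1;
	}

	/* With HWP active, this write is now expected to fail with EBUSY
	 * instead of unregistering the driver while HWP keeps forcing
	 * minimum performance. */
	if (write(fd, "off", 3) < 0)
		printf("write failed: %s\n", strerror(errno));
	else
		printf("driver status set to off\n");

	close(fd);
	return 0;
}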
Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Acked-by: Srinivas Pandruvada srinivas.pandruvada@linux.intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/cpufreq/intel_pstate.c | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c index 8c730a47e0537..97ed4fd0f1342 100644 --- a/drivers/cpufreq/intel_pstate.c +++ b/drivers/cpufreq/intel_pstate.c @@ -2534,9 +2534,15 @@ static int intel_pstate_update_status(const char *buf, size_t size) { int ret;
- if (size == 3 && !strncmp(buf, "off", size)) - return intel_pstate_driver ? - intel_pstate_unregister_driver() : -EINVAL; + if (size == 3 && !strncmp(buf, "off", size)) { + if (!intel_pstate_driver) + return -EINVAL; + + if (hwp_active) + return -EBUSY; + + return intel_pstate_unregister_driver(); + }
if (size == 6 && !strncmp(buf, "active", size)) { if (intel_pstate_driver) {
From: Francisco Jerez currojerez@riseup.net
[ Upstream commit eacc9c5a927e474c173a5d53dd7fb8e306511768 ]
This fixes the behavior of the scaling_max_freq and scaling_min_freq sysfs files in systems which had turbo disabled by the BIOS.
Caleb noticed that the HWP is programmed to operate in the wrong P-state range on his system when the CPUFREQ policy min/max frequency is set via sysfs. This seems to be because in his system intel_pstate_get_hwp_max() is returning the maximum turbo P-state even though turbo was disabled by the BIOS, which causes intel_pstate to scale kHz frequencies incorrectly e.g. setting the maximum turbo frequency whenever the maximum guaranteed frequency is requested via sysfs.
Tested-by: Caleb Callaway caleb.callaway@intel.com Signed-off-by: Francisco Jerez currojerez@riseup.net Acked-by: Srinivas Pandruvada srinivas.pandruvada@linux.intel.com [ rjw: Minor subject edits ] Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/cpufreq/intel_pstate.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c index 97ed4fd0f1342..36a469150ff9c 100644 --- a/drivers/cpufreq/intel_pstate.c +++ b/drivers/cpufreq/intel_pstate.c @@ -762,7 +762,7 @@ static void intel_pstate_get_hwp_max(unsigned int cpu, int *phy_max,
rdmsrl_on_cpu(cpu, MSR_HWP_CAPABILITIES, &cap); WRITE_ONCE(all_cpu_data[cpu]->hwp_cap_cached, cap); - if (global.no_turbo) + if (global.no_turbo || global.turbo_disabled) *current_max = HWP_GUARANTEED_PERF(cap); else *current_max = HWP_HIGHEST_PERF(cap);
From: Kamil Lorenc kamil@re-ws.pl
[ Upstream commit a609d0259183a841621f252e067f40f8cc25d6f6 ]
Keenetic Plus DSL is an xDSL modem that uses the dm9620 as its USB interface.
Signed-off-by: Kamil Lorenc kamil@re-ws.pl Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/usb/dm9601.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/drivers/net/usb/dm9601.c b/drivers/net/usb/dm9601.c index b91f92e4e5f22..915ac75b55fc7 100644 --- a/drivers/net/usb/dm9601.c +++ b/drivers/net/usb/dm9601.c @@ -625,6 +625,10 @@ static const struct usb_device_id products[] = { USB_DEVICE(0x0a46, 0x1269), /* DM9621A USB to Fast Ethernet Adapter */ .driver_info = (unsigned long)&dm9601_info, }, + { + USB_DEVICE(0x0586, 0x3427), /* ZyXEL Keenetic Plus DSL xDSL modem */ + .driver_info = (unsigned long)&dm9601_info, + }, {}, // END };
From: Jessica Yu jeyu@kernel.org
[ Upstream commit e0328feda79d9681b3e3245e6e180295550c8ee9 ]
In the arm64 module linker script, the section .text.ftrace_trampoline is specified unconditionally regardless of whether CONFIG_DYNAMIC_FTRACE is enabled (this is simply due to the limitation that module linker scripts are not preprocessed like the vmlinux one).
Normally, for .plt and .text.ftrace_trampoline, the section flags present in the module binary wouldn't matter since module_frob_arch_sections() would assign them manually anyway. However, the arm64 module loader only sets the section flags for .text.ftrace_trampoline when CONFIG_DYNAMIC_FTRACE=y. That has only become problematic recently, due to a change in binutils-2.35 where the .text.ftrace_trampoline section (along with the .plt section) is now marked writable and executable (WAX).
We no longer allow writable and executable sections to be loaded due to commit 5c3a7db0c7ec ("module: Harden STRICT_MODULE_RWX"), so this is causing all modules linked with binutils-2.35 to be rejected under arm64. Drop the IS_ENABLED(CONFIG_DYNAMIC_FTRACE) check in module_frob_arch_sections() so that the section flags for .text.ftrace_trampoline get properly set to SHF_EXECINSTR|SHF_ALLOC, without SHF_WRITE.
Signed-off-by: Jessica Yu jeyu@kernel.org Acked-by: Will Deacon will@kernel.org Acked-by: Ard Biesheuvel ardb@kernel.org Link: http://lore.kernel.org/r/20200831094651.GA16385@linux-8ccs Link: https://lore.kernel.org/r/20200901160016.3646-1-jeyu@kernel.org Signed-off-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/kernel/module-plts.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/module-plts.c b/arch/arm64/kernel/module-plts.c index 65b08a74aec65..37c0b51a7b7b5 100644 --- a/arch/arm64/kernel/module-plts.c +++ b/arch/arm64/kernel/module-plts.c @@ -271,8 +271,7 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs, mod->arch.core.plt_shndx = i; else if (!strcmp(secstrings + sechdrs[i].sh_name, ".init.plt")) mod->arch.init.plt_shndx = i; - else if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE) && - !strcmp(secstrings + sechdrs[i].sh_name, + else if (!strcmp(secstrings + sechdrs[i].sh_name, ".text.ftrace_trampoline")) tramp = sechdrs + i; else if (sechdrs[i].sh_type == SHT_SYMTAB)
From: Rander Wang rander.wang@intel.com
[ Upstream commit f804a324a41a880c1ab43cc5145d8b3e5790430d ]
Add Rocketlake HDMI codec support. Rocketlake shares the pin-to-port mapping table with Tigerlake.
Signed-off-by: Rander Wang rander.wang@intel.com Reviewed-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com Reviewed-by: Ranjani Sridharan ranjani.sridharan@linux.intel.com Signed-off-by: Kai Vehmanen kai.vehmanen@linux.intel.com Link: https://lore.kernel.org/r/20200902154207.1440393-1-kai.vehmanen@linux.intel.... Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- sound/pci/hda/patch_hdmi.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c index 9442127d9b15d..43ae35338a383 100644 --- a/sound/pci/hda/patch_hdmi.c +++ b/sound/pci/hda/patch_hdmi.c @@ -4204,6 +4204,7 @@ HDA_CODEC_ENTRY(0x8086280c, "Cannonlake HDMI", patch_i915_glk_hdmi), HDA_CODEC_ENTRY(0x8086280d, "Geminilake HDMI", patch_i915_glk_hdmi), HDA_CODEC_ENTRY(0x8086280f, "Icelake HDMI", patch_i915_icl_hdmi), HDA_CODEC_ENTRY(0x80862812, "Tigerlake HDMI", patch_i915_tgl_hdmi), +HDA_CODEC_ENTRY(0x80862816, "Rocketlake HDMI", patch_i915_tgl_hdmi), HDA_CODEC_ENTRY(0x8086281a, "Jasperlake HDMI", patch_i915_icl_hdmi), HDA_CODEC_ENTRY(0x8086281b, "Elkhartlake HDMI", patch_i915_icl_hdmi), HDA_CODEC_ENTRY(0x80862880, "CedarTrail HDMI", patch_generic_hdmi),
From: Rander Wang rander.wang@intel.com
[ Upstream commit 13774d81f38538c5fa2924bdcdfa509155480fa6 ]
In snd_hdac_device_init(), pm_runtime_set_active() is called to increase the child_count of the parent device. But when building the connection with the GPU fails, for instance in the case where the integrated graphics GPU is disabled, snd_hdac_ext_bus_device_exit() is invoked to clean up the HD-audio extended codec base device. At this point the child_count of the parent is not decreased, which prevents the parent device from being suspended.
This patch calls pm_runtime_set_suspended() in snd_hdac_device_exit() to decrease the child_count of the parent device, matching snd_hdac_device_init(). pm_runtime_set_suspended() makes sure the child_count is not decreased if the device is already suspended.
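The child_count bookkeeping being balanced here can be modeled in a few lines (a toy model with made-up names, not the runtime-PM implementation): set_active() bumps the parent's count, and set_suspended() only drops it when the device was actually active, which is why the extra call in the exit path is safe even for already-suspended codecs.

#include <stdbool.h>
#include <stdio.h>

/* Toy model of runtime-PM child counting: set_active() bumps the parent's
 * child_count, set_suspended() drops it only if the device was active. */
struct dev {
	struct dev *parent;
	bool active;
	int child_count;
};

static void set_active(struct dev *d)
{
	if (!d->active) {
		d->active = true;
		if (d->parent)
			d->parent->child_count++;
	}
}

static void set_suspended(struct dev *d)
{
	if (d->active) {
		d->active = false;
		if (d->parent)
			d->parent->child_count--;
	}
}

int main(void)
{
	struct dev parent = { 0 }, codec = { .parent = &parent };

	set_active(&codec);		/* the init side of the pairing */
	printf("after init: child_count=%d\n", parent.child_count);

	set_suspended(&codec);		/* the exit-side call added here */
	printf("after exit: child_count=%d\n", parent.child_count);
	return 0;
}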
Signed-off-by: Rander Wang rander.wang@intel.com Reviewed-by: Ranjani Sridharan ranjani.sridharan@linux.intel.com Reviewed-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com Reviewed-by: Bard Liao yung-chuan.liao@linux.intel.com Reviewed-by: Guennadi Liakhovetski guennadi.liakhovetski@linux.intel.com Signed-off-by: Kai Vehmanen kai.vehmanen@linux.intel.com Link: https://lore.kernel.org/r/20200902154218.1440441-1-kai.vehmanen@linux.intel.... Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- sound/hda/hdac_device.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/sound/hda/hdac_device.c b/sound/hda/hdac_device.c index 333220f0f8afc..3e9e9ac804f62 100644 --- a/sound/hda/hdac_device.c +++ b/sound/hda/hdac_device.c @@ -127,6 +127,8 @@ EXPORT_SYMBOL_GPL(snd_hdac_device_init); void snd_hdac_device_exit(struct hdac_device *codec) { pm_runtime_put_noidle(&codec->dev); + /* keep balance of runtime PM child_count in parent device */ + pm_runtime_set_suspended(&codec->dev); snd_hdac_bus_remove_device(codec->bus, codec); kfree(codec->vendor_name); kfree(codec->chip_name);
From: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com
[ Upstream commit b79de57b4378a93115307be6962d05b099eb0f37 ]
We use both HDaudio and HDAudio; pick one to make searches easier. No functionality change.
Also fix timestamping typo in documentation.
Reported-by: Guennadi Liakhovetski guennadi.liakhovetski@linux.intel.com Signed-off-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com Reviewed-by: Bard Liao yung-chuan.liao@linux.intel.com Reviewed-by: Guennadi Liakhovetski guennadi.liakhovetski@linux.intel.com Signed-off-by: Kai Vehmanen kai.vehmanen@linux.intel.com Link: https://lore.kernel.org/r/20200902154250.1440585-1-kai.vehmanen@linux.intel.... Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- Documentation/sound/designs/timestamping.rst | 2 +- sound/hda/intel-dsp-config.c | 10 +++++----- sound/x86/Kconfig | 2 +- 3 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/Documentation/sound/designs/timestamping.rst b/Documentation/sound/designs/timestamping.rst index 2b0fff5034151..7c7ecf5dbc4bd 100644 --- a/Documentation/sound/designs/timestamping.rst +++ b/Documentation/sound/designs/timestamping.rst @@ -143,7 +143,7 @@ timestamp shows when the information is put together by the driver before returning from the ``STATUS`` and ``STATUS_EXT`` ioctl. in most cases this driver_timestamp will be identical to the regular system tstamp.
-Examples of typestamping with HDaudio: +Examples of timestamping with HDAudio:
1. DMA timestamp, no compensation for DMA+analog delay :: diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c index 99aec73491676..1c5114dedda92 100644 --- a/sound/hda/intel-dsp-config.c +++ b/sound/hda/intel-dsp-config.c @@ -54,7 +54,7 @@ static const struct config_entry config_table[] = { #endif /* * Apollolake (Broxton-P) - * the legacy HDaudio driver is used except on Up Squared (SOF) and + * the legacy HDAudio driver is used except on Up Squared (SOF) and * Chromebooks (SST) */ #if IS_ENABLED(CONFIG_SND_SOC_SOF_APOLLOLAKE) @@ -89,7 +89,7 @@ static const struct config_entry config_table[] = { }, #endif /* - * Skylake and Kabylake use legacy HDaudio driver except for Google + * Skylake and Kabylake use legacy HDAudio driver except for Google * Chromebooks (SST) */
@@ -135,7 +135,7 @@ static const struct config_entry config_table[] = { #endif
/* - * Geminilake uses legacy HDaudio driver except for Google + * Geminilake uses legacy HDAudio driver except for Google * Chromebooks */ /* Geminilake */ @@ -157,7 +157,7 @@ static const struct config_entry config_table[] = {
/* * CoffeeLake, CannonLake, CometLake, IceLake, TigerLake use legacy - * HDaudio driver except for Google Chromebooks and when DMICs are + * HDAudio driver except for Google Chromebooks and when DMICs are * present. Two cases are required since Coreboot does not expose NHLT * tables. * @@ -391,7 +391,7 @@ int snd_intel_dsp_driver_probe(struct pci_dev *pci) if (pci->class == 0x040300) return SND_INTEL_DSP_DRIVER_LEGACY; if (pci->class != 0x040100 && pci->class != 0x040380) { - dev_err(&pci->dev, "Unknown PCI class/subclass/prog-if information (0x%06x) found, selecting HDA legacy driver\n", pci->class); + dev_err(&pci->dev, "Unknown PCI class/subclass/prog-if information (0x%06x) found, selecting HDAudio legacy driver\n", pci->class); return SND_INTEL_DSP_DRIVER_LEGACY; }
diff --git a/sound/x86/Kconfig b/sound/x86/Kconfig index 77777192f6508..4ffcc5e623c22 100644 --- a/sound/x86/Kconfig +++ b/sound/x86/Kconfig @@ -9,7 +9,7 @@ menuconfig SND_X86 if SND_X86
config HDMI_LPE_AUDIO - tristate "HDMI audio without HDaudio on Intel Atom platforms" + tristate "HDMI audio without HDAudio on Intel Atom platforms" depends on DRM_I915 select SND_PCM help
From: Xie He xie.he.0141@gmail.com
[ Upstream commit 2b7bcd967a0f5b7ac9bb0c37b92de36e073dd119 ]
Change the default value of hard_header_len in hdlc.c from 16 to 0.
Currently there are 6 HDLC protocol drivers, among them:
hdlc_raw_eth, hdlc_cisco, hdlc_ppp, hdlc_x25 set hard_header_len when attaching the protocol, overriding the default. So this patch does not affect them.
hdlc_raw and hdlc_fr don't set hard_header_len when attaching the protocol. So this patch will change the hard_header_len of the HDLC device for them from 16 to 0.
This is the correct change because both hdlc_raw and hdlc_fr don't have header_ops, and the code in net/packet/af_packet.c expects the value of hard_header_len to be consistent with header_ops.
In net/packet/af_packet.c, in the packet_snd function, for AF_PACKET/DGRAM sockets the kernel reserves a headroom of hard_header_len and calls dev_hard_header to fill in that headroom, while for AF_PACKET/RAW sockets it does not reserve the headroom and does not call dev_hard_header, but instead checks that the user has provided a header of length hard_header_len (in dev_validate_header).
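A user-space sketch of the two socket flavors the paragraph describes (it needs CAP_NET_RAW to actually run; the names and the ETH_P_ALL protocol choice are just for illustration):

#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
	/* SOCK_DGRAM: the kernel reserves hard_header_len bytes of headroom
	 * and fills the link-layer header in via dev_hard_header(). */
	int dgram = socket(AF_PACKET, SOCK_DGRAM, htons(ETH_P_ALL));

	/* SOCK_RAW: the caller must provide the link-layer header itself,
	 * and dev_validate_header() checks it against hard_header_len. */
	int raw = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

	if (dgram < 0 || raw < 0)
		perror("socket(AF_PACKET) (requires CAP_NET_RAW)");
	else
		printf("created DGRAM and RAW packet sockets\n");
	return 0;
}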
Cc: Krzysztof Halasa khc@pm.waw.pl Cc: Martin Schiller ms@dev.tdt.de Signed-off-by: Xie He xie.he.0141@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wan/hdlc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/wan/hdlc.c b/drivers/net/wan/hdlc.c index 386ed2aa31fd9..9b00708676cf7 100644 --- a/drivers/net/wan/hdlc.c +++ b/drivers/net/wan/hdlc.c @@ -229,7 +229,7 @@ static void hdlc_setup_dev(struct net_device *dev) dev->min_mtu = 68; dev->max_mtu = HDLC_MAX_MTU; dev->type = ARPHRD_RAWHDLC; - dev->hard_header_len = 16; + dev->hard_header_len = 0; dev->needed_headroom = 0; dev->addr_len = 0; dev->header_ops = &hdlc_null_ops;
From: Sandeep Raghuraman sandy.8925@gmail.com
[ Upstream commit d98299885c9ea140c1108545186593deba36c4ac ]
On my R9 390, the voltage was reported as a constant 1000 mV. This was due to a bug in smu7_hwmgr.c, in the smu7_read_sensor() function, where some magic constants were used in a condition, to determine whether the voltage should be read from PLANE2_VID or PLANE1_VID. The VDDC mask was incorrectly used, instead of the VDDGFX mask.
This patch changes the code to use the correct defined constants (and apply the correct bitshift), thus resulting in correct voltage reporting.
Signed-off-by: Sandeep Raghuraman sandy.8925@gmail.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c index 753cb2cf6b77e..3adf9c1dfdbb0 100644 --- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c @@ -3587,7 +3587,8 @@ static int smu7_read_sensor(struct pp_hwmgr *hwmgr, int idx, case AMDGPU_PP_SENSOR_GPU_POWER: return smu7_get_gpu_power(hwmgr, (uint32_t *)value); case AMDGPU_PP_SENSOR_VDDGFX: - if ((data->vr_config & 0xff) == 0x2) + if ((data->vr_config & VRCONF_VDDGFX_MASK) == + (VR_SVI2_PLANE_2 << VRCONF_VDDGFX_SHIFT)) val_vid = PHM_READ_INDIRECT_FIELD(hwmgr->device, CGS_IND_REG__SMC, PWR_SVI2_STATUS, PLANE2_VID); else
From: Joerg Roedel jroedel@suse.de
[ Upstream commit 7cad554887f1c5fd77e57e6bf4be38370c2160cb ]
Do not force devices supporting IOMMUv2 to be direct mapped when memory encryption is active. This might cause them to be unusable because their DMA mask does not include the encryption bit.
Signed-off-by: Joerg Roedel jroedel@suse.de Link: https://lore.kernel.org/r/20200824105415.21000-2-joro@8bytes.org Signed-off-by: Joerg Roedel jroedel@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/iommu/amd/iommu.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c index 2f22326ee4dfe..547b41e376574 100644 --- a/drivers/iommu/amd/iommu.c +++ b/drivers/iommu/amd/iommu.c @@ -2650,7 +2650,12 @@ static int amd_iommu_def_domain_type(struct device *dev) if (!dev_data) return 0;
- if (dev_data->iommu_v2) + /* + * Do not identity map IOMMUv2 capable devices when memory encryption is + * active, because some of those devices (AMD GPUs) don't have the + * encryption bit in their DMA-mask and require remapping. + */ + if (!mem_encrypt_active() && dev_data->iommu_v2) return IOMMU_DOMAIN_IDENTITY;
return 0;
From: Joerg Roedel jroedel@suse.de
[ Upstream commit 2822e582501b65707089b097e773e6fd70774841 ]
When memory encryption is active the device is likely not in a direct mapped domain. Forbid using IOMMUv2 functionality for now until finer grained checks for this have been implemented.
Signed-off-by: Joerg Roedel jroedel@suse.de Link: https://lore.kernel.org/r/20200824105415.21000-3-joro@8bytes.org Signed-off-by: Joerg Roedel jroedel@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/iommu/amd/iommu_v2.c | 7 +++++++ 1 file changed, 7 insertions(+)
diff --git a/drivers/iommu/amd/iommu_v2.c b/drivers/iommu/amd/iommu_v2.c index e4b025c5637c4..5a188cac7a0f1 100644 --- a/drivers/iommu/amd/iommu_v2.c +++ b/drivers/iommu/amd/iommu_v2.c @@ -737,6 +737,13 @@ int amd_iommu_init_device(struct pci_dev *pdev, int pasids)
might_sleep();
+ /* + * When memory encryption is active the device is likely not in a + * direct-mapped domain. Forbid using IOMMUv2 functionality for now. + */ + if (mem_encrypt_active()) + return -ENODEV; + if (!amd_iommu_v2_supported()) return -ENODEV;
From: Leon Romanovsky leonro@nvidia.com
[ Upstream commit cfc905f158eaa099d6258031614d11869e7ef71c ]
GCOV built with GCC 10 doesn't initialize the n_function variable. This produces various kernel panics, as seen by Colin in Ubuntu and by me in FC 32.
As a workaround, let's disable the GCOV build for the broken GCC 10 versions.
Link: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1891288 Link: https://lore.kernel.org/lkml/20200827133932.3338519-1-leon@kernel.org Link: https://lore.kernel.org/lkml/CAHk-=whbijeSdSvx-Xcr0DPMj0BiwhJ+uiNnDSVZcr_h_k... Cc: Colin Ian King colin.king@canonical.com Signed-off-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/gcov/Kconfig | 1 + 1 file changed, 1 insertion(+)
diff --git a/kernel/gcov/Kconfig b/kernel/gcov/Kconfig index 3110c77230c7f..bb4b680e8455a 100644 --- a/kernel/gcov/Kconfig +++ b/kernel/gcov/Kconfig @@ -4,6 +4,7 @@ menu "GCOV-based kernel profiling" config GCOV_KERNEL bool "Enable gcov-based kernel profiling" depends on DEBUG_FS + depends on !CC_IS_GCC || GCC_VERSION < 100000 select CONSTRUCTORS if !UML default n help
linux-stable-mirror@lists.linaro.org