This is the start of the stable review cycle for the 4.14.52 release. There are 52 patches in this series, all of which will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Tue Jun 26 14:27:24 UTC 2018. Anything received after that time might be too late.
The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.52-rc1...
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y
and the diffstat can be found below.
thanks,
greg k-h
------------- Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 4.14.52-rc1
Vlastimil Babka vbabka@suse.cz mm, page_alloc: do not break __GFP_THISNODE by zonelist reset
Thadeu Lima de Souza Cascardo cascardo@canonical.com fs/binfmt_misc.c: do not allow offset overflow
Michael S. Tsirkin mst@redhat.com vhost: fix info leak due to uninitialized memory
Jason Gerecke killertofu@gmail.com HID: wacom: Correct logical maximum Y for 2nd-gen Intuos Pro large
Even Xu even.xu@intel.com HID: intel_ish-hid: ipc: register more pm callbacks to support hibernation
Martin Brandenburg martin@omnibond.com orangefs: report attributes_mask and attributes for statx
Martin Brandenburg martin@omnibond.com orangefs: set i_size on new symlink
Luca Coelho luciano.coelho@intel.com iwlwifi: fw: harden page loading code
Tony Luck tony.luck@intel.com x86/intel_rdt: Enable CMT and MBM on new Skylake stepping
Stefan Potyra Stefan.Potyra@elektrobit.com w1: mxc_w1: Enable clock before calling clk_get_rate() on it
Hans de Goede hdegoede@redhat.com libata: Drop SanDisk SD7UB3Q*G1001 NOLPM quirk
Dan Carpenter dan.carpenter@oracle.com libata: zpodd: small read overflow in eject_tray()
Chen Yu yu.c.chen@intel.com cpufreq: governors: Fix long idle detection logic in load calculation
Tao Wang kevin.wangtao@hisilicon.com cpufreq: Fix new policy initialization during limits updates via sysfs
Tejun Heo tj@kernel.org bdi: Move cgroup bdi_writeback to a dedicated low concurrency workqueue
Roman Pen roman.penyaev@profitbricks.com blk-mq: reinit q->tag_set_list entry only after grace period
Josef Bacik jbacik@fb.com nbd: use bd_set_size when updating disk size
Josef Bacik jbacik@fb.com nbd: update size when connected
Josef Bacik jbacik@fb.com nbd: fix nbd device deletion
Shirish Pargaonkar shirishpargaonkar@gmail.com cifs: For SMB2 security information query, check for minimum sized security descriptor instead of sizeof FileAllInformation class
Mark Syms mark.syms@citrix.com CIFS: 511c54a2f69195b28afb9dd119f03787b1625bb4 adds a check for session expiry
Steve French stfrench@microsoft.com smb3: on reconnect set PreviousSessionId field
Steve French stfrench@microsoft.com smb3: fix various xid leaks
Tony Luck tony.luck@intel.com x86/MCE: Fix stack out-of-bounds write in mce-inject.c: Flags_read()
Dennis Wassenberg dennis.wassenberg@secunet.com ALSA: hda: add dock and led support for HP ProBook 640 G4
Dennis Wassenberg dennis.wassenberg@secunet.com ALSA: hda: add dock and led support for HP EliteBook 830 G5
Bo Chen chenbo@pdx.edu ALSA: hda - Handle kzalloc() failure in snd_hda_attach_pcm_stream()
Takashi Iwai tiwai@suse.de ALSA: hda/conexant - Add fixup for HP Z2 G4 workstation
Hui Wang hui.wang@canonical.com ALSA: hda/realtek - Enable mic-mute hotkey for several Lenovo AIOs
Qu Wenruo wqu@suse.com btrfs: scrub: Don't use inode pages for device replace
Su Yue suy.fnst@cn.fujitsu.com btrfs: return error value if create_io_em failed in cow_file_range
Omar Sandoval osandov@fb.com Btrfs: fix memory and mount leak in btrfs_ioctl_rm_dev_v2()
Omar Sandoval osandov@fb.com Btrfs: fix clone vs chattr NODATASUM race
Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp driver core: Don't ignore class_dir_create_and_add() failure.
Jan Kara jack@suse.cz ext4: fix fencepost error in check for inode count overflow during resize
Theodore Ts'o tytso@mit.edu ext4: correctly handle a zero-length xattr with a non-zero e_value_offs
Theodore Ts'o tytso@mit.edu ext4: bubble errors from ext4_find_inline_data_nolock() up to ext4_iget()
Theodore Ts'o tytso@mit.edu ext4: do not allow external inodes for inline data
Lukas Czerner lczerner@redhat.com ext4: update mtime in ext4_punch_hole even if no blocks are released
Jan Kara jack@suse.cz ext4: fix hole length detection in ext4_ind_map_blocks()
Trond Myklebust trond.myklebust@primarydata.com NFSv4.1: Fix up replays of interrupted requests
Daniel Borkmann daniel@iogearbox.net tls: fix use-after-free in tls_push_record
Dexuan Cui decui@microsoft.com hv_netvsc: Fix a network regression after ifdown/ifup
Willem de Bruijn willemb@google.com net: in virtio_net_hdr only add VLAN_HLEN to csum_start if payload holds vlan
Paolo Abeni pabeni@redhat.com udp: fix rx queue len reported by diag and proc interface
Cong Wang xiyou.wangcong@gmail.com socket: close race condition between sock_close() and sockfs_setattr()
Frank van der Linden fllinden@amazon.com tcp: verify the checksum of the first data segment in a new connection
Davide Caratti dcaratti@redhat.com net/sched: act_simple: fix parsing of TCA_DEF_DATA
Zhouyang Jia jiazhouyang09@gmail.com net: dsa: add error handling for pskb_trim_rcsum
Julian Anastasov ja@ssi.bg ipv6: allow PMTU exceptions to local routes
Bjørn Mork bjorn@mork.no cdc_ncm: avoid padding beyond end of skb
Xiangning Yu yuxiangning@gmail.com bonding: re-evaluate force_primary when the primary slave name changes
-------------
Diffstat:
 Makefile                                       |   4 +-
 arch/x86/kernel/cpu/intel_rdt.c                |   2 +
 arch/x86/kernel/cpu/mcheck/mce-inject.c        |   2 +-
 block/blk-mq.c                                 |   3 +-
 drivers/ata/libata-core.c                      |   3 -
 drivers/ata/libata-zpodd.c                     |   2 +-
 drivers/base/core.c                            |  14 ++-
 drivers/block/nbd.c                            |  17 ++-
 drivers/cpufreq/cpufreq.c                      |   2 +
 drivers/cpufreq/cpufreq_governor.c             |  12 +-
 drivers/hid/intel-ish-hid/ipc/pci-ish.c        |  22 ++--
 drivers/hid/wacom_sys.c                        |   8 ++
 drivers/net/bonding/bond_options.c             |   1 +
 drivers/net/hyperv/netvsc_drv.c                |   4 +-
 drivers/net/tap.c                              |   5 +-
 drivers/net/tun.c                              |   3 +-
 drivers/net/usb/cdc_ncm.c                      |   4 +-
 drivers/net/virtio_net.c                       |   3 +-
 drivers/net/wireless/intel/iwlwifi/fw/paging.c |  49 ++++++--
 drivers/vhost/vhost.c                          |   3 +
 drivers/w1/masters/mxc_w1.c                    |  20 ++--
 fs/binfmt_misc.c                               |  12 +-
 fs/btrfs/inode.c                               |   4 +-
 fs/btrfs/ioctl.c                               |  18 +--
 fs/btrfs/scrub.c                               |   2 +-
 fs/cifs/cifsacl.h                              |  14 +++
 fs/cifs/smb2ops.c                              |  68 ++++++++----
 fs/cifs/smb2pdu.c                              |   4 +-
 fs/ext4/indirect.c                             |  14 ++-
 fs/ext4/inline.c                               |   6 +
 fs/ext4/inode.c                                |  46 ++++----
 fs/ext4/resize.c                               |   2 +-
 fs/ext4/xattr.c                                |   2 +-
 fs/nfs/nfs4_fs.h                               |   2 +-
 fs/nfs/nfs4proc.c                              | 148 +++++++++++++++++--------
 fs/orangefs/inode.c                            |   7 ++
 fs/orangefs/namei.c                            |   7 ++
 include/linux/virtio_net.h                     |  11 +-
 include/net/transp_v6.h                        |  11 +-
 include/net/udp.h                              |   5 +
 mm/backing-dev.c                               |  18 ++-
 mm/page_alloc.c                                |   1 -
 net/dsa/tag_trailer.c                          |   3 +-
 net/ipv4/tcp_ipv4.c                            |   4 +
 net/ipv4/udp.c                                 |   2 +-
 net/ipv4/udp_diag.c                            |   2 +-
 net/ipv6/datagram.c                            |   6 +-
 net/ipv6/route.c                               |   3 -
 net/ipv6/tcp_ipv6.c                            |   4 +
 net/ipv6/udp.c                                 |   3 +-
 net/packet/af_packet.c                         |   4 +-
 net/sched/act_simple.c                         |  15 +--
 net/socket.c                                   |  18 ++-
 net/tls/tls_sw.c                               |  26 ++--
 sound/pci/hda/hda_controller.c                 |   4 +-
 sound/pci/hda/patch_conexant.c                 |   3 +
 sound/pci/hda/patch_realtek.c                  |   6 +-
 57 files changed, 473 insertions(+), 215 deletions(-)
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Xiangning Yu yuxiangning@gmail.com
[ Upstream commit eb55bbf865d9979098c6a7a17cbdb41237ece951 ]
There is a timing issue under active-standby mode: when bond_enslave() is called, bond->params.primary might not be initialized yet.
Any time the primary slave string changes, bond->force_primary should be set to true to make sure the primary becomes the active slave.
Signed-off-by: Xiangning Yu yuxiangning@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/bonding/bond_options.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/net/bonding/bond_options.c +++ b/drivers/net/bonding/bond_options.c @@ -1142,6 +1142,7 @@ static int bond_option_primary_set(struc slave->dev->name); rcu_assign_pointer(bond->primary_slave, slave); strcpy(bond->params.primary, slave->dev->name); + bond->force_primary = true; bond_select_active_slave(bond); goto out; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Julian Anastasov ja@ssi.bg
[ Upstream commit 0975764684487bf3f7a47eef009e750ea41bd514 ]
IPVS setups with a local client and a remote tunnel server need to create a PMTU exception for the local virtual IP. What we do is change the PMTU from 64KB (on "lo") to 1460 in the common case.
Suggested-by: Martin KaFai Lau kafai@fb.com Fixes: 45e4fd26683c ("ipv6: Only create RTF_CACHE routes after encountering pmtu exception") Fixes: 7343ff31ebf0 ("ipv6: Don't create clones of host routes.") Signed-off-by: Julian Anastasov ja@ssi.bg Acked-by: David Ahern dsahern@gmail.com Acked-by: Martin KaFai Lau kafai@fb.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/ipv6/route.c | 3 --- 1 file changed, 3 deletions(-)
--- a/net/ipv6/route.c +++ b/net/ipv6/route.c @@ -1476,9 +1476,6 @@ static void __ip6_rt_update_pmtu(struct const struct in6_addr *daddr, *saddr; struct rt6_info *rt6 = (struct rt6_info *)dst;
- if (rt6->rt6i_flags & RTF_LOCAL) - return; - if (dst_metric_locked(dst, RTAX_MTU)) return;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zhouyang Jia jiazhouyang09@gmail.com
[ Upstream commit 349b71d6f427ff8211adf50839dbbff3f27c1805 ]
When pskb_trim_rcsum fails, the lack of error-handling code may cause unexpected results.
This patch adds error-handling code after calling pskb_trim_rcsum.
Signed-off-by: Zhouyang Jia jiazhouyang09@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/dsa/tag_trailer.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/net/dsa/tag_trailer.c +++ b/net/dsa/tag_trailer.c @@ -79,7 +79,8 @@ static struct sk_buff *trailer_rcv(struc if (unlikely(ds->cpu_port_mask & BIT(source_port))) return NULL;
- pskb_trim_rcsum(skb, skb->len - 4); + if (pskb_trim_rcsum(skb, skb->len - 4)) + return NULL;
skb->dev = ds->ports[source_port].netdev;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Davide Caratti dcaratti@redhat.com
[ Upstream commit 8d499533e0bc02d44283dbdab03142b599b8ba16 ]
Use nla_strlcpy() to avoid copying data beyond the length of the TCA_DEF_DATA netlink attribute, in case it is less than SIMP_MAX_DATA and does not end with a '\0' character.
v2: fix errors in the commit message, thanks Hangbin Liu
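For illustration, here is a small userspace sketch (not the kernel code; the two helpers below are simplified models of strlcpy() and nla_strlcpy()) showing why an unbounded string copy over a netlink attribute payload that is not NUL-terminated can read past the attribute, while an attribute-length-bounded copy cannot:

#include <stdio.h>
#include <string.h>

#define SIMP_MAX_DATA 32

/* model of strlcpy(): bounded only by the destination size,
 * keeps reading the source until it finds a '\0' */
static void copy_like_strlcpy(char *dst, const char *src, size_t dstsize)
{
	size_t i;

	for (i = 0; i + 1 < dstsize && src[i]; i++)
		dst[i] = src[i];
	dst[i] = '\0';
}

/* model of nla_strlcpy(): additionally bounded by the attribute length */
static void copy_like_nla_strlcpy(char *dst, const char *attr,
				  size_t attr_len, size_t dstsize)
{
	size_t n = attr_len < dstsize - 1 ? attr_len : dstsize - 1;

	memcpy(dst, attr, n);
	dst[n] = '\0';
}

int main(void)
{
	/* 4-byte attribute payload with no trailing '\0', followed by
	 * whatever happens to sit next to it in memory */
	char payload[] = { 'd', 'a', 't', 'a', 'J', 'U', 'N', 'K', '\0' };
	char dst[SIMP_MAX_DATA];

	copy_like_strlcpy(dst, payload, sizeof(dst));
	printf("unbounded copy: %s\n", dst);	/* prints "dataJUNK" */

	copy_like_nla_strlcpy(dst, payload, 4, sizeof(dst));
	printf("bounded copy  : %s\n", dst);	/* prints "data" */
	return 0;
}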
Fixes: fa1b1cff3d06 ("net_cls_act: Make act_simple use of netlink policy.") Signed-off-by: Davide Caratti dcaratti@redhat.com Reviewed-by: Simon Horman simon.horman@netronome.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/sched/act_simple.c | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-)
--- a/net/sched/act_simple.c +++ b/net/sched/act_simple.c @@ -53,22 +53,22 @@ static void tcf_simp_release(struct tc_a kfree(d->tcfd_defdata); }
-static int alloc_defdata(struct tcf_defact *d, char *defdata) +static int alloc_defdata(struct tcf_defact *d, const struct nlattr *defdata) { d->tcfd_defdata = kzalloc(SIMP_MAX_DATA, GFP_KERNEL); if (unlikely(!d->tcfd_defdata)) return -ENOMEM; - strlcpy(d->tcfd_defdata, defdata, SIMP_MAX_DATA); + nla_strlcpy(d->tcfd_defdata, defdata, SIMP_MAX_DATA); return 0; }
-static void reset_policy(struct tcf_defact *d, char *defdata, +static void reset_policy(struct tcf_defact *d, const struct nlattr *defdata, struct tc_defact *p) { spin_lock_bh(&d->tcf_lock); d->tcf_action = p->action; memset(d->tcfd_defdata, 0, SIMP_MAX_DATA); - strlcpy(d->tcfd_defdata, defdata, SIMP_MAX_DATA); + nla_strlcpy(d->tcfd_defdata, defdata, SIMP_MAX_DATA); spin_unlock_bh(&d->tcf_lock); }
@@ -87,7 +87,6 @@ static int tcf_simp_init(struct net *net struct tcf_defact *d; bool exists = false; int ret = 0, err; - char *defdata;
if (nla == NULL) return -EINVAL; @@ -110,8 +109,6 @@ static int tcf_simp_init(struct net *net return -EINVAL; }
- defdata = nla_data(tb[TCA_DEF_DATA]); - if (!exists) { ret = tcf_idr_create(tn, parm->index, est, a, &act_simp_ops, bind, false); @@ -119,7 +116,7 @@ static int tcf_simp_init(struct net *net return ret;
d = to_defact(*a); - ret = alloc_defdata(d, defdata); + ret = alloc_defdata(d, tb[TCA_DEF_DATA]); if (ret < 0) { tcf_idr_release(*a, bind); return ret; @@ -133,7 +130,7 @@ static int tcf_simp_init(struct net *net if (!ovr) return -EEXIST;
- reset_policy(d, defdata, parm); + reset_policy(d, tb[TCA_DEF_DATA], parm); }
if (ret == ACT_P_CREATED)
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Frank van der Linden fllinden@amazon.com
[ Upstream commit 4fd44a98ffe0d048246efef67ed640fdf2098a62 ]
commit 079096f103fa ("tcp/dccp: install syn_recv requests into ehash table") introduced an optimization for the handling of child sockets created for a new TCP connection.
But this optimization passes any data associated with the last ACK of the connection handshake up the stack without verifying its checksum, because it calls tcp_child_process(), which in turn calls tcp_rcv_state_process() directly. These lower-level processing functions do not do any checksum verification.
Insert a tcp_checksum_complete call in the TCP_NEW_SYN_RECEIVE path to fix this.
Fixes: 079096f103fa ("tcp/dccp: install syn_recv requests into ehash table")
Signed-off-by: Frank van der Linden fllinden@amazon.com
Signed-off-by: Eric Dumazet edumazet@google.com
Tested-by: Balbir Singh bsingharora@gmail.com
Reviewed-by: Balbir Singh bsingharora@gmail.com
Signed-off-by: David S. Miller davem@davemloft.net
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 net/ipv4/tcp_ipv4.c | 4 ++++
 net/ipv6/tcp_ipv6.c | 4 ++++
 2 files changed, 8 insertions(+)
--- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -1675,6 +1675,10 @@ process: reqsk_put(req); goto discard_it; } + if (tcp_checksum_complete(skb)) { + reqsk_put(req); + goto csum_error; + } if (unlikely(sk->sk_state != TCP_LISTEN)) { inet_csk_reqsk_queue_drop_and_put(sk, req); goto lookup; --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -1453,6 +1453,10 @@ process: reqsk_put(req); goto discard_it; } + if (tcp_checksum_complete(skb)) { + reqsk_put(req); + goto csum_error; + } if (unlikely(sk->sk_state != TCP_LISTEN)) { inet_csk_reqsk_queue_drop_and_put(sk, req); goto lookup;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Cong Wang xiyou.wangcong@gmail.com
[ Upstream commit 6d8c50dcb029872b298eea68cc6209c866fd3e14 ]
fchownat() doesn't even hold a refcnt on the fd until it figures out the fd is really needed (otherwise it is ignored), and it releases it after it resolves the path. This means sock_close() could race with sockfs_setattr(), which leads to a NULL pointer dereference since we typically set sock->sk to NULL in ->release().
As pointed out by Al, this is unique to sockfs. So we can fix this in socket layer by acquiring inode_lock in sock_close() and checking against NULL in sockfs_setattr().
sock_release() is called in many places, only the sock_close() path matters here. And fortunately, this should not affect normal sock_close() as it is only called when the last fd refcnt is gone. It only affects sock_close() with a parallel sockfs_setattr() in progress, which is not common.
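For context, a rough userspace sketch of the shape of this race (purely illustrative, not the syzkaller reproducer): one thread keeps creating and closing sockets while another keeps calling fchownat(AT_EMPTY_PATH) on the most recently used fd number, so the chown can land on a socket whose ->release() has already run. Build with -pthread.

#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

static volatile int last_fd = -1;

/* keep creating sockets and closing them right away */
static void *churn(void *arg)
{
	(void)arg;
	for (;;) {
		int fd = socket(AF_INET, SOCK_DGRAM, 0);

		last_fd = fd;
		close(fd);
	}
	return NULL;
}

/* keep chowning whatever fd number the other thread last used */
static void *chowner(void *arg)
{
	(void)arg;
	for (;;) {
		int fd = last_fd;

		if (fd >= 0)
			fchownat(fd, "", getuid(), getgid(), AT_EMPTY_PATH);
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, churn, NULL);
	pthread_create(&b, NULL, chowner, NULL);
	pause();
	return 0;
}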
Fixes: 86741ec25462 ("net: core: Add a UID field to struct sock.") Reported-by: shankarapailoor shankarapailoor@gmail.com Cc: Tetsuo Handa penguin-kernel@i-love.sakura.ne.jp Cc: Lorenzo Colitti lorenzo@google.com Cc: Al Viro viro@zeniv.linux.org.uk Signed-off-by: Cong Wang xiyou.wangcong@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/socket.c | 18 +++++++++++++++--- 1 file changed, 15 insertions(+), 3 deletions(-)
--- a/net/socket.c +++ b/net/socket.c @@ -538,7 +538,10 @@ static int sockfs_setattr(struct dentry if (!err && (iattr->ia_valid & ATTR_UID)) { struct socket *sock = SOCKET_I(d_inode(dentry));
- sock->sk->sk_uid = iattr->ia_uid; + if (sock->sk) + sock->sk->sk_uid = iattr->ia_uid; + else + err = -ENOENT; }
return err; @@ -588,12 +591,16 @@ EXPORT_SYMBOL(sock_alloc); * an inode not a file. */
-void sock_release(struct socket *sock) +static void __sock_release(struct socket *sock, struct inode *inode) { if (sock->ops) { struct module *owner = sock->ops->owner;
+ if (inode) + inode_lock(inode); sock->ops->release(sock); + if (inode) + inode_unlock(inode); sock->ops = NULL; module_put(owner); } @@ -608,6 +615,11 @@ void sock_release(struct socket *sock) } sock->file = NULL; } + +void sock_release(struct socket *sock) +{ + __sock_release(sock, NULL); +} EXPORT_SYMBOL(sock_release);
void __sock_tx_timestamp(__u16 tsflags, __u8 *tx_flags) @@ -1122,7 +1134,7 @@ static int sock_mmap(struct file *file,
static int sock_close(struct inode *inode, struct file *filp) { - sock_release(SOCKET_I(inode)); + __sock_release(SOCKET_I(inode), inode); return 0; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paolo Abeni pabeni@redhat.com
[ Upstream commit 6c206b20092a3623184cff9470dba75d21507874 ]
After commit 6b229cf77d68 ("udp: add batching to udp_rmem_release()") the sk_rmem_alloc field no longer measures the receive queue length exactly, because we batch the rmem release. The issue became really apparent only after commit 0d4a6608f68c ("udp: do rmem bulk free even if the rx sk queue is empty"): user space can easily observe an empty socket with a non-zero queue length reported by the 'ss' tool or the procfs interface.
We need to use a custom UDP helper to report the correct queue length, taking into account the forward allocation deficit.
Reported-by: trevor.francis@46labs.com
Fixes: 6b229cf77d68 ("UDP: add batching to udp_rmem_release()")
Signed-off-by: Paolo Abeni pabeni@redhat.com
Signed-off-by: David S. Miller davem@davemloft.net
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 include/net/transp_v6.h | 11 +++++++++--
 include/net/udp.h       |  5 +++++
 net/ipv4/udp.c          |  2 +-
 net/ipv4/udp_diag.c     |  2 +-
 net/ipv6/datagram.c     |  6 +++---
 net/ipv6/udp.c          |  3 ++-
 6 files changed, 21 insertions(+), 8 deletions(-)
--- a/include/net/transp_v6.h +++ b/include/net/transp_v6.h @@ -45,8 +45,15 @@ int ip6_datagram_send_ctl(struct net *ne struct flowi6 *fl6, struct ipcm6_cookie *ipc6, struct sockcm_cookie *sockc);
-void ip6_dgram_sock_seq_show(struct seq_file *seq, struct sock *sp, - __u16 srcp, __u16 destp, int bucket); +void __ip6_dgram_sock_seq_show(struct seq_file *seq, struct sock *sp, + __u16 srcp, __u16 destp, int rqueue, int bucket); +static inline void +ip6_dgram_sock_seq_show(struct seq_file *seq, struct sock *sp, __u16 srcp, + __u16 destp, int bucket) +{ + __ip6_dgram_sock_seq_show(seq, sp, srcp, destp, sk_rmem_alloc_get(sp), + bucket); +}
#define LOOPBACK4_IPV6 cpu_to_be32(0x7f000006)
--- a/include/net/udp.h +++ b/include/net/udp.h @@ -244,6 +244,11 @@ static inline __be16 udp_flow_src_port(s return htons((((u64) hash * (max - min)) >> 32) + min); }
+static inline int udp_rqueue_get(struct sock *sk) +{ + return sk_rmem_alloc_get(sk) - READ_ONCE(udp_sk(sk)->forward_deficit); +} + /* net/ipv4/udp.c */ void udp_destruct_sock(struct sock *sk); void skb_consume_udp(struct sock *sk, struct sk_buff *skb, int len); --- a/net/ipv4/udp.c +++ b/net/ipv4/udp.c @@ -2720,7 +2720,7 @@ static void udp4_format_sock(struct sock " %02X %08X:%08X %02X:%08lX %08X %5u %8d %lu %d %pK %d", bucket, src, srcp, dest, destp, sp->sk_state, sk_wmem_alloc_get(sp), - sk_rmem_alloc_get(sp), + udp_rqueue_get(sp), 0, 0L, 0, from_kuid_munged(seq_user_ns(f), sock_i_uid(sp)), 0, sock_i_ino(sp), --- a/net/ipv4/udp_diag.c +++ b/net/ipv4/udp_diag.c @@ -163,7 +163,7 @@ static int udp_diag_dump_one(struct sk_b static void udp_diag_get_info(struct sock *sk, struct inet_diag_msg *r, void *info) { - r->idiag_rqueue = sk_rmem_alloc_get(sk); + r->idiag_rqueue = udp_rqueue_get(sk); r->idiag_wqueue = sk_wmem_alloc_get(sk); }
--- a/net/ipv6/datagram.c +++ b/net/ipv6/datagram.c @@ -1026,8 +1026,8 @@ exit_f: } EXPORT_SYMBOL_GPL(ip6_datagram_send_ctl);
-void ip6_dgram_sock_seq_show(struct seq_file *seq, struct sock *sp, - __u16 srcp, __u16 destp, int bucket) +void __ip6_dgram_sock_seq_show(struct seq_file *seq, struct sock *sp, + __u16 srcp, __u16 destp, int rqueue, int bucket) { const struct in6_addr *dest, *src;
@@ -1043,7 +1043,7 @@ void ip6_dgram_sock_seq_show(struct seq_ dest->s6_addr32[2], dest->s6_addr32[3], destp, sp->sk_state, sk_wmem_alloc_get(sp), - sk_rmem_alloc_get(sp), + rqueue, 0, 0L, 0, from_kuid_munged(seq_user_ns(seq), sock_i_uid(sp)), 0, --- a/net/ipv6/udp.c +++ b/net/ipv6/udp.c @@ -1503,7 +1503,8 @@ int udp6_seq_show(struct seq_file *seq, struct inet_sock *inet = inet_sk(v); __u16 srcp = ntohs(inet->inet_sport); __u16 destp = ntohs(inet->inet_dport); - ip6_dgram_sock_seq_show(seq, v, srcp, destp, bucket); + __ip6_dgram_sock_seq_show(seq, v, srcp, destp, + udp_rqueue_get(v), bucket); } return 0; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Willem de Bruijn willemb@google.com
[ Upstream commit fd3a88625844907151737fc3b4201676effa6d27 ]
Tun, tap, virtio, packet and uml vector all use struct virtio_net_hdr to communicate packet metadata to userspace.
For skbuffs with vlan, the first two return the packet as it may have existed on the wire, inserting the VLAN tag in the user buffer. Then virtio_net_hdr.csum_start needs to be adjusted by VLAN_HLEN bytes.
Commit f09e2249c4f5 ("macvtap: restore vlan header on user read") added this feature to macvtap. Commit 3ce9b20f1971 ("macvtap: Fix csum_start when VLAN tags are present") then fixed up csum_start.
Virtio, packet and uml do not insert the vlan header in the user buffer.
When introducing virtio_net_hdr_from_skb to deduplicate filling in the virtio_net_hdr, the variant from macvtap which adds VLAN_HLEN was applied uniformly, breaking csum offset for packets with vlan on virtio and packet.
Make insertion of VLAN_HLEN optional. Convert the callers to pass it when needed.
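As a worked example of the offset arithmetic (hypothetical numbers, standalone userspace code rather than kernel code):

#include <stdio.h>

int main(void)
{
	int eth_hlen = 14, ip_hlen = 20, vlan_hlen = 4;
	/* checksum of an IPv4/TCP packet starts at the TCP header */
	int csum_start = eth_hlen + ip_hlen;	/* 34, relative to skb data */

	/* tun/tap re-insert the 802.1Q tag into the user buffer,
	 * shifting everything after the MAC addresses by 4 bytes */
	printf("tap/tun csum_start       = %d\n", csum_start + vlan_hlen);	/* 38 */

	/* virtio and af_packet hand out the packet without the tag, so
	 * adding VLAN_HLEN would point into the middle of the TCP header */
	printf("virtio/packet csum_start = %d\n", csum_start);			/* 34 */
	return 0;
}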
Fixes: e858fae2b0b8f4 ("virtio_net: use common code for virtio_net_hdr and skb GSO conversion")
Fixes: 1276f24eeef2 ("packet: use common code for virtio_net_hdr and skb GSO conversion")
Signed-off-by: Willem de Bruijn willemb@google.com
Signed-off-by: David S. Miller davem@davemloft.net
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/net/tap.c          |  5 ++++-
 drivers/net/tun.c          |  3 ++-
 drivers/net/virtio_net.c   |  3 ++-
 include/linux/virtio_net.h | 11 ++++-------
 net/packet/af_packet.c     |  4 ++--
 5 files changed, 14 insertions(+), 12 deletions(-)
--- a/drivers/net/tap.c +++ b/drivers/net/tap.c @@ -777,13 +777,16 @@ static ssize_t tap_put_user(struct tap_q int total;
if (q->flags & IFF_VNET_HDR) { + int vlan_hlen = skb_vlan_tag_present(skb) ? VLAN_HLEN : 0; struct virtio_net_hdr vnet_hdr; + vnet_hdr_len = READ_ONCE(q->vnet_hdr_sz); if (iov_iter_count(iter) < vnet_hdr_len) return -EINVAL;
if (virtio_net_hdr_from_skb(skb, &vnet_hdr, - tap_is_little_endian(q), true)) + tap_is_little_endian(q), true, + vlan_hlen)) BUG();
if (copy_to_iter(&vnet_hdr, sizeof(vnet_hdr), iter) != --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -1648,7 +1648,8 @@ static ssize_t tun_put_user(struct tun_s return -EINVAL;
if (virtio_net_hdr_from_skb(skb, &gso, - tun_is_little_endian(tun), true)) { + tun_is_little_endian(tun), true, + vlan_hlen)) { struct skb_shared_info *sinfo = skb_shinfo(skb); pr_err("unexpected GSO type: " "0x%x, gso_size %d, hdr_len %d\n", --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1237,7 +1237,8 @@ static int xmit_skb(struct send_queue *s hdr = skb_vnet_hdr(skb);
if (virtio_net_hdr_from_skb(skb, &hdr->hdr, - virtio_is_little_endian(vi->vdev), false)) + virtio_is_little_endian(vi->vdev), false, + 0)) BUG();
if (vi->mergeable_rx_bufs) --- a/include/linux/virtio_net.h +++ b/include/linux/virtio_net.h @@ -58,7 +58,8 @@ static inline int virtio_net_hdr_to_skb( static inline int virtio_net_hdr_from_skb(const struct sk_buff *skb, struct virtio_net_hdr *hdr, bool little_endian, - bool has_data_valid) + bool has_data_valid, + int vlan_hlen) { memset(hdr, 0, sizeof(*hdr)); /* no info leak */
@@ -83,12 +84,8 @@ static inline int virtio_net_hdr_from_sk
if (skb->ip_summed == CHECKSUM_PARTIAL) { hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM; - if (skb_vlan_tag_present(skb)) - hdr->csum_start = __cpu_to_virtio16(little_endian, - skb_checksum_start_offset(skb) + VLAN_HLEN); - else - hdr->csum_start = __cpu_to_virtio16(little_endian, - skb_checksum_start_offset(skb)); + hdr->csum_start = __cpu_to_virtio16(little_endian, + skb_checksum_start_offset(skb) + vlan_hlen); hdr->csum_offset = __cpu_to_virtio16(little_endian, skb->csum_offset); } else if (has_data_valid && --- a/net/packet/af_packet.c +++ b/net/packet/af_packet.c @@ -2046,7 +2046,7 @@ static int packet_rcv_vnet(struct msghdr return -EINVAL; *len -= sizeof(vnet_hdr);
- if (virtio_net_hdr_from_skb(skb, &vnet_hdr, vio_le(), true)) + if (virtio_net_hdr_from_skb(skb, &vnet_hdr, vio_le(), true, 0)) return -EINVAL;
return memcpy_to_msg(msg, (void *)&vnet_hdr, sizeof(vnet_hdr)); @@ -2313,7 +2313,7 @@ static int tpacket_rcv(struct sk_buff *s if (do_vnet) { if (virtio_net_hdr_from_skb(skb, h.raw + macoff - sizeof(struct virtio_net_hdr), - vio_le(), true)) { + vio_le(), true, 0)) { spin_lock(&sk->sk_receive_queue.lock); goto drop_n_account; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dexuan Cui decui@microsoft.com
[ Upstream commit 52acf73b6e9a6962045feb2ba5a8921da2201915 ]
Recently people reported the NIC stops working after "ifdown eth0; ifup eth0". It turns out in this case the TX queues are not enabled, after the refactoring of the common detach logic: when the NIC has sub-channels, usually we enable all the TX queues after all sub-channels are set up: see rndis_set_subchannel() -> netif_device_attach(), but in the case of "ifdown eth0; ifup eth0" where the number of channels doesn't change, we also must make sure the TX queues are enabled. The patch fixes the regression.
Fixes: 7b2ee50c0cd5 ("hv_netvsc: common detach logic") Signed-off-by: Dexuan Cui decui@microsoft.com Cc: Stephen Hemminger sthemmin@microsoft.com Cc: K. Y. Srinivasan kys@microsoft.com Cc: Haiyang Zhang haiyangz@microsoft.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/hyperv/netvsc_drv.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/net/hyperv/netvsc_drv.c +++ b/drivers/net/hyperv/netvsc_drv.c @@ -123,8 +123,10 @@ static int netvsc_open(struct net_device }
rdev = nvdev->extension; - if (!rdev->link_state) + if (!rdev->link_state) { netif_carrier_on(net); + netif_tx_wake_all_queues(net); + }
if (vf_netdev) { /* Setting synthetic device up transparently sets
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Daniel Borkmann daniel@iogearbox.net
[ Upstream commit a447da7d00410278c90d3576782a43f8b675d7be ]
syzkaller managed to trigger a use-after-free in tls like the following:
BUG: KASAN: use-after-free in tls_push_record.constprop.15+0x6a2/0x810 [tls]
Write of size 1 at addr ffff88037aa08000 by task a.out/2317

CPU: 3 PID: 2317 Comm: a.out Not tainted 4.17.0+ #144
Hardware name: LENOVO 20FBCTO1WW/20FBCTO1WW, BIOS N1FET47W (1.21 ) 11/28/2016
Call Trace:
 dump_stack+0x71/0xab
 print_address_description+0x6a/0x280
 kasan_report+0x258/0x380
 ? tls_push_record.constprop.15+0x6a2/0x810 [tls]
 tls_push_record.constprop.15+0x6a2/0x810 [tls]
 tls_sw_push_pending_record+0x2e/0x40 [tls]
 tls_sk_proto_close+0x3fe/0x710 [tls]
 ? tcp_check_oom+0x4c0/0x4c0
 ? tls_write_space+0x260/0x260 [tls]
 ? kmem_cache_free+0x88/0x1f0
 inet_release+0xd6/0x1b0
 __sock_release+0xc0/0x240
 sock_close+0x11/0x20
 __fput+0x22d/0x660
 task_work_run+0x114/0x1a0
 do_exit+0x71a/0x2780
 ? mm_update_next_owner+0x650/0x650
 ? handle_mm_fault+0x2f5/0x5f0
 ? __do_page_fault+0x44f/0xa50
 ? mm_fault_error+0x2d0/0x2d0
 do_group_exit+0xde/0x300
 __x64_sys_exit_group+0x3a/0x50
 do_syscall_64+0x9a/0x300
 ? page_fault+0x8/0x30
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
This happened through fault injection where the aead_req allocation in tls_do_encryption() eventually failed and we returned -ENOMEM from the function. It turns out that the use-after-free is triggered from tls_sw_sendmsg() in the second tls_push_record(). The error then triggers a jump to waiting for memory in sk_stream_wait_memory(), or to returning immediately in the case of MSG_DONTWAIT. What follows is the trim_both_sgl(sk, orig_size), which drops elements from the sg list added via tls_sw_sendmsg().

The use-after-free is then triggered when the socket is being closed, where the tls_sk_proto_close() callback is invoked. tls_complete_pending_work() figures out that there's a pending closed tls record to be flushed and thus calls into tls_push_pending_closed_record() from there. ctx->push_pending_record() is called from the latter, which is tls_sw_push_pending_record() in the sw path. This again calls into tls_push_record(). And here tls_fill_prepend() will panic since the buffer address has been freed earlier via trim_both_sgl().

One way to fix it is to move the aead request allocation out of tls_do_encryption(), early into tls_push_record(). This means we don't prep the tls header and advance the state to TLS_PENDING_CLOSED_RECORD before an allocation that could potentially fail has happened. That fixes the issue on my side.
Fixes: 3c4d7559159b ("tls: kernel TLS support") Reported-by: syzbot+5c74af81c547738e1684@syzkaller.appspotmail.com Reported-by: syzbot+709f2810a6a05f11d4d3@syzkaller.appspotmail.com Signed-off-by: Daniel Borkmann daniel@iogearbox.net Acked-by: Dave Watson davejwatson@fb.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/tls/tls_sw.c | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-)
--- a/net/tls/tls_sw.c +++ b/net/tls/tls_sw.c @@ -211,18 +211,12 @@ static void tls_free_both_sg(struct sock }
static int tls_do_encryption(struct tls_context *tls_ctx, - struct tls_sw_context *ctx, size_t data_len, - gfp_t flags) + struct tls_sw_context *ctx, + struct aead_request *aead_req, + size_t data_len) { - unsigned int req_size = sizeof(struct aead_request) + - crypto_aead_reqsize(ctx->aead_send); - struct aead_request *aead_req; int rc;
- aead_req = kzalloc(req_size, flags); - if (!aead_req) - return -ENOMEM; - ctx->sg_encrypted_data[0].offset += tls_ctx->prepend_size; ctx->sg_encrypted_data[0].length -= tls_ctx->prepend_size;
@@ -235,7 +229,6 @@ static int tls_do_encryption(struct tls_ ctx->sg_encrypted_data[0].offset -= tls_ctx->prepend_size; ctx->sg_encrypted_data[0].length += tls_ctx->prepend_size;
- kfree(aead_req); return rc; }
@@ -244,8 +237,14 @@ static int tls_push_record(struct sock * { struct tls_context *tls_ctx = tls_get_ctx(sk); struct tls_sw_context *ctx = tls_sw_ctx(tls_ctx); + struct aead_request *req; int rc;
+ req = kzalloc(sizeof(struct aead_request) + + crypto_aead_reqsize(ctx->aead_send), sk->sk_allocation); + if (!req) + return -ENOMEM; + sg_mark_end(ctx->sg_plaintext_data + ctx->sg_plaintext_num_elem - 1); sg_mark_end(ctx->sg_encrypted_data + ctx->sg_encrypted_num_elem - 1);
@@ -261,15 +260,14 @@ static int tls_push_record(struct sock * tls_ctx->pending_open_record_frags = 0; set_bit(TLS_PENDING_CLOSED_RECORD, &tls_ctx->flags);
- rc = tls_do_encryption(tls_ctx, ctx, ctx->sg_plaintext_size, - sk->sk_allocation); + rc = tls_do_encryption(tls_ctx, ctx, req, ctx->sg_plaintext_size); if (rc < 0) { /* If we are called from write_space and * we fail, we need to set this SOCK_NOSPACE * to trigger another write_space in the future. */ set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); - return rc; + goto out_req; }
free_sg(sk, ctx->sg_plaintext_data, &ctx->sg_plaintext_num_elem, @@ -284,6 +282,8 @@ static int tls_push_record(struct sock * tls_err_abort(sk);
tls_advance_record_sn(sk, tls_ctx); +out_req: + kfree(req); return rc; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust trond.myklebust@primarydata.com
commit 3be0f80b5fe9c16eca2d538f799b94ca8aa59433 upstream.
If the previous request on a slot was interrupted before it was processed by the server, then our slot sequence number may be out of whack, and so we try the next operation using the old sequence number.
The problem with this is that not all servers check to see that the client is replaying the same operations as previously when they decide to go to the replay cache, and so instead of the expected error of NFS4ERR_SEQ_FALSE_RETRY, we get a replay of the old reply, which could (if the operations match up) be mistaken by the client for a new reply.
To fix this, we attempt to send a COMPOUND containing only the SEQUENCE op in order to resync our slot sequence number.
Cc: Olga Kornievskaia olga.kornievskaia@gmail.com [olga.kornievskaia@gmail.com: fix an Oops] Signed-off-by: Trond Myklebust trond.myklebust@primarydata.com Signed-off-by: Anna Schumaker Anna.Schumaker@Netapp.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 fs/nfs/nfs4_fs.h  |   2
 fs/nfs/nfs4proc.c | 150 +++++++++++++++++++++++++++++++++++++-----------------
 2 files changed, 104 insertions(+), 48 deletions(-)
--- a/fs/nfs/nfs4_fs.h +++ b/fs/nfs/nfs4_fs.h @@ -465,7 +465,7 @@ extern void nfs_increment_open_seqid(int extern void nfs_increment_lock_seqid(int status, struct nfs_seqid *seqid); extern void nfs_release_seqid(struct nfs_seqid *seqid); extern void nfs_free_seqid(struct nfs_seqid *seqid); -extern int nfs4_setup_sequence(const struct nfs_client *client, +extern int nfs4_setup_sequence(struct nfs_client *client, struct nfs4_sequence_args *args, struct nfs4_sequence_res *res, struct rpc_task *task); --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -96,6 +96,10 @@ static int nfs4_do_setattr(struct inode struct nfs_open_context *ctx, struct nfs4_label *ilabel, struct nfs4_label *olabel); #ifdef CONFIG_NFS_V4_1 +static struct rpc_task *_nfs41_proc_sequence(struct nfs_client *clp, + struct rpc_cred *cred, + struct nfs4_slot *slot, + bool is_privileged); static int nfs41_test_stateid(struct nfs_server *, nfs4_stateid *, struct rpc_cred *); static int nfs41_free_stateid(struct nfs_server *, const nfs4_stateid *, @@ -641,13 +645,14 @@ static int nfs40_sequence_done(struct rp
#if defined(CONFIG_NFS_V4_1)
-static void nfs41_sequence_free_slot(struct nfs4_sequence_res *res) +static void nfs41_release_slot(struct nfs4_slot *slot) { struct nfs4_session *session; struct nfs4_slot_table *tbl; - struct nfs4_slot *slot = res->sr_slot; bool send_new_highest_used_slotid = false;
+ if (!slot) + return; tbl = slot->table; session = tbl->session;
@@ -673,13 +678,18 @@ static void nfs41_sequence_free_slot(str send_new_highest_used_slotid = false; out_unlock: spin_unlock(&tbl->slot_tbl_lock); - res->sr_slot = NULL; if (send_new_highest_used_slotid) nfs41_notify_server(session->clp); if (waitqueue_active(&tbl->slot_waitq)) wake_up_all(&tbl->slot_waitq); }
+static void nfs41_sequence_free_slot(struct nfs4_sequence_res *res) +{ + nfs41_release_slot(res->sr_slot); + res->sr_slot = NULL; +} + static int nfs41_sequence_process(struct rpc_task *task, struct nfs4_sequence_res *res) { @@ -707,13 +717,6 @@ static int nfs41_sequence_process(struct /* Check the SEQUENCE operation status */ switch (res->sr_status) { case 0: - /* If previous op on slot was interrupted and we reused - * the seq# and got a reply from the cache, then retry - */ - if (task->tk_status == -EREMOTEIO && interrupted) { - ++slot->seq_nr; - goto retry_nowait; - } /* Update the slot's sequence and clientid lease timer */ slot->seq_done = 1; clp = session->clp; @@ -747,16 +750,16 @@ static int nfs41_sequence_process(struct * The slot id we used was probably retired. Try again * using a different slot id. */ + if (slot->seq_nr < slot->table->target_highest_slotid) + goto session_recover; goto retry_nowait; case -NFS4ERR_SEQ_MISORDERED: /* * Was the last operation on this sequence interrupted? * If so, retry after bumping the sequence number. */ - if (interrupted) { - ++slot->seq_nr; - goto retry_nowait; - } + if (interrupted) + goto retry_new_seq; /* * Could this slot have been previously retired? * If so, then the server may be expecting seq_nr = 1! @@ -765,10 +768,11 @@ static int nfs41_sequence_process(struct slot->seq_nr = 1; goto retry_nowait; } - break; + goto session_recover; case -NFS4ERR_SEQ_FALSE_RETRY: - ++slot->seq_nr; - goto retry_nowait; + if (interrupted) + goto retry_new_seq; + goto session_recover; default: /* Just update the slot sequence no. */ slot->seq_done = 1; @@ -778,6 +782,11 @@ out: dprintk("%s: Error %d free the slot \n", __func__, res->sr_status); out_noaction: return ret; +session_recover: + nfs4_schedule_session_recovery(session, res->sr_status); + goto retry_nowait; +retry_new_seq: + ++slot->seq_nr; retry_nowait: if (rpc_restart_call_prepare(task)) { nfs41_sequence_free_slot(res); @@ -854,6 +863,17 @@ static const struct rpc_call_ops nfs41_c .rpc_call_done = nfs41_call_sync_done, };
+static void +nfs4_sequence_process_interrupted(struct nfs_client *client, + struct nfs4_slot *slot, struct rpc_cred *cred) +{ + struct rpc_task *task; + + task = _nfs41_proc_sequence(client, cred, slot, true); + if (!IS_ERR(task)) + rpc_put_task_async(task); +} + #else /* !CONFIG_NFS_V4_1 */
static int nfs4_sequence_process(struct rpc_task *task, struct nfs4_sequence_res *res) @@ -874,9 +894,34 @@ int nfs4_sequence_done(struct rpc_task * } EXPORT_SYMBOL_GPL(nfs4_sequence_done);
+static void +nfs4_sequence_process_interrupted(struct nfs_client *client, + struct nfs4_slot *slot, struct rpc_cred *cred) +{ + WARN_ON_ONCE(1); + slot->interrupted = 0; +} + #endif /* !CONFIG_NFS_V4_1 */
-int nfs4_setup_sequence(const struct nfs_client *client, +static +void nfs4_sequence_attach_slot(struct nfs4_sequence_args *args, + struct nfs4_sequence_res *res, + struct nfs4_slot *slot) +{ + if (!slot) + return; + slot->privileged = args->sa_privileged ? 1 : 0; + args->sa_slot = slot; + + res->sr_slot = slot; + res->sr_timestamp = jiffies; + res->sr_status_flags = 0; + res->sr_status = 1; + +} + +int nfs4_setup_sequence(struct nfs_client *client, struct nfs4_sequence_args *args, struct nfs4_sequence_res *res, struct rpc_task *task) @@ -894,30 +939,29 @@ int nfs4_setup_sequence(const struct nfs task->tk_timeout = 0; }
- spin_lock(&tbl->slot_tbl_lock); - /* The state manager will wait until the slot table is empty */ - if (nfs4_slot_tbl_draining(tbl) && !args->sa_privileged) - goto out_sleep; - - slot = nfs4_alloc_slot(tbl); - if (IS_ERR(slot)) { - /* Try again in 1/4 second */ - if (slot == ERR_PTR(-ENOMEM)) - task->tk_timeout = HZ >> 2; - goto out_sleep; - } - spin_unlock(&tbl->slot_tbl_lock); - - slot->privileged = args->sa_privileged ? 1 : 0; - args->sa_slot = slot; + for (;;) { + spin_lock(&tbl->slot_tbl_lock); + /* The state manager will wait until the slot table is empty */ + if (nfs4_slot_tbl_draining(tbl) && !args->sa_privileged) + goto out_sleep; + + slot = nfs4_alloc_slot(tbl); + if (IS_ERR(slot)) { + /* Try again in 1/4 second */ + if (slot == ERR_PTR(-ENOMEM)) + task->tk_timeout = HZ >> 2; + goto out_sleep; + } + spin_unlock(&tbl->slot_tbl_lock);
- res->sr_slot = slot; - if (session) { - res->sr_timestamp = jiffies; - res->sr_status_flags = 0; - res->sr_status = 1; + if (likely(!slot->interrupted)) + break; + nfs4_sequence_process_interrupted(client, + slot, task->tk_msg.rpc_cred); }
+ nfs4_sequence_attach_slot(args, res, slot); + trace_nfs4_setup_sequence(session, args); out_start: rpc_call_start(task); @@ -8151,6 +8195,7 @@ static const struct rpc_call_ops nfs41_s
static struct rpc_task *_nfs41_proc_sequence(struct nfs_client *clp, struct rpc_cred *cred, + struct nfs4_slot *slot, bool is_privileged) { struct nfs4_sequence_data *calldata; @@ -8164,15 +8209,18 @@ static struct rpc_task *_nfs41_proc_sequ .callback_ops = &nfs41_sequence_ops, .flags = RPC_TASK_ASYNC | RPC_TASK_TIMEOUT, }; + struct rpc_task *ret;
+ ret = ERR_PTR(-EIO); if (!atomic_inc_not_zero(&clp->cl_count)) - return ERR_PTR(-EIO); + goto out_err; + + ret = ERR_PTR(-ENOMEM); calldata = kzalloc(sizeof(*calldata), GFP_NOFS); - if (calldata == NULL) { - nfs_put_client(clp); - return ERR_PTR(-ENOMEM); - } + if (calldata == NULL) + goto out_put_clp; nfs4_init_sequence(&calldata->args, &calldata->res, 0); + nfs4_sequence_attach_slot(&calldata->args, &calldata->res, slot); if (is_privileged) nfs4_set_sequence_privileged(&calldata->args); msg.rpc_argp = &calldata->args; @@ -8180,7 +8228,15 @@ static struct rpc_task *_nfs41_proc_sequ calldata->clp = clp; task_setup_data.callback_data = calldata;
- return rpc_run_task(&task_setup_data); + ret = rpc_run_task(&task_setup_data); + if (IS_ERR(ret)) + goto out_err; + return ret; +out_put_clp: + nfs_put_client(clp); +out_err: + nfs41_release_slot(slot); + return ret; }
static int nfs41_proc_async_sequence(struct nfs_client *clp, struct rpc_cred *cred, unsigned renew_flags) @@ -8190,7 +8246,7 @@ static int nfs41_proc_async_sequence(str
if ((renew_flags & NFS4_RENEW_TIMEOUT) == 0) return -EAGAIN; - task = _nfs41_proc_sequence(clp, cred, false); + task = _nfs41_proc_sequence(clp, cred, NULL, false); if (IS_ERR(task)) ret = PTR_ERR(task); else @@ -8204,7 +8260,7 @@ static int nfs4_proc_sequence(struct nfs struct rpc_task *task; int ret;
- task = _nfs41_proc_sequence(clp, cred, true); + task = _nfs41_proc_sequence(clp, cred, NULL, true); if (IS_ERR(task)) { ret = PTR_ERR(task); goto out;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jan Kara jack@suse.cz
commit 2ee3ee06a8fd792765fa3267ddf928997797eec5 upstream.
When ext4_ind_map_blocks() computes the length of a hole, it doesn't account for the fact that the mapped offset may be somewhere in the middle of a completely empty subtree. In such a case it will return too large a hole length, which then results in lseek(SEEK_DATA) returning an incorrect offset beyond the end of the hole.
Fix the problem by correctly taking offset within a subtree into account when computing a length of a hole.
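As a worked example (made-up numbers, standalone userspace code): assume 1 KiB blocks, so epb = 256, a depth-3 indirect chain with per-level offsets {5, 7, 100}, and a lookup that stopped one level below the root. The old computation counts the whole 256-block subtree; the fixed one only counts the 156 blocks at or beyond the current offset:

#include <stdio.h>

int main(void)
{
	unsigned int epb = 256;		/* block pointers per block: 1024 / 4 */
	int depth = 3;
	int partial_level = 1;		/* index of 'partial' within the chain */
	int offsets[] = { 5, 7, 100 };	/* per-level offsets of the mapped block */
	unsigned long count;
	int i;

	/* old: size of the whole subtree under 'partial' */
	count = 1;
	for (i = 0; partial_level + i != depth - 1; i++)
		count *= epb;
	printf("old hole length: %lu blocks\n", count);	/* 256 */

	/* new: only the part of that subtree at or beyond the current offset */
	count = 0;
	for (i = partial_level + 1; i < depth; i++)
		count = count * epb + (epb - offsets[i] - 1);
	count++;
	printf("new hole length: %lu blocks\n", count);	/* 156 */
	return 0;
}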
Fixes: facab4d9711e7aa3532cb82643803e8f1b9518e8 CC: stable@vger.kernel.org Reported-by: Jeff Mahoney jeffm@suse.com Signed-off-by: Jan Kara jack@suse.cz Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ext4/indirect.c | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-)
--- a/fs/ext4/indirect.c +++ b/fs/ext4/indirect.c @@ -561,10 +561,16 @@ int ext4_ind_map_blocks(handle_t *handle unsigned epb = inode->i_sb->s_blocksize / sizeof(u32); int i;
- /* Count number blocks in a subtree under 'partial' */ - count = 1; - for (i = 0; partial + i != chain + depth - 1; i++) - count *= epb; + /* + * Count number blocks in a subtree under 'partial'. At each + * level we count number of complete empty subtrees beyond + * current offset and then descend into the subtree only + * partially beyond current offset. + */ + count = 0; + for (i = partial - chain + 1; i < depth; i++) + count = count * epb + (epb - offsets[i] - 1); + count++; /* Fill in size of a hole we found */ map->m_pblk = 0; map->m_len = min_t(unsigned int, map->m_len, count);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Lukas Czerner lczerner@redhat.com
commit eee597ac931305eff3d3fd1d61d6aae553bc0984 upstream.
Currently in ext4_punch_hole we're going to skip the mtime update if there are no actual blocks to release. However we've actually modified the file by zeroing the partial block so the mtime should be updated.
Moreover the sync and datasync handling is skipped as well, which is also wrong. Fix it.
Signed-off-by: Lukas Czerner lczerner@redhat.com Signed-off-by: Theodore Ts'o tytso@mit.edu Reported-by: Joe Habermann joe.habermann@quantum.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ext4/inode.c | 40 ++++++++++++++++++++-------------------- 1 file changed, 20 insertions(+), 20 deletions(-)
--- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -4246,28 +4246,28 @@ int ext4_punch_hole(struct inode *inode, EXT4_BLOCK_SIZE_BITS(sb); stop_block = (offset + length) >> EXT4_BLOCK_SIZE_BITS(sb);
- /* If there are no blocks to remove, return now */ - if (first_block >= stop_block) - goto out_stop; - - down_write(&EXT4_I(inode)->i_data_sem); - ext4_discard_preallocations(inode); - - ret = ext4_es_remove_extent(inode, first_block, - stop_block - first_block); - if (ret) { - up_write(&EXT4_I(inode)->i_data_sem); - goto out_stop; - } + /* If there are blocks to remove, do it */ + if (stop_block > first_block) { + + down_write(&EXT4_I(inode)->i_data_sem); + ext4_discard_preallocations(inode);
- if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) - ret = ext4_ext_remove_space(inode, first_block, - stop_block - 1); - else - ret = ext4_ind_remove_space(handle, inode, first_block, - stop_block); + ret = ext4_es_remove_extent(inode, first_block, + stop_block - first_block); + if (ret) { + up_write(&EXT4_I(inode)->i_data_sem); + goto out_stop; + } + + if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) + ret = ext4_ext_remove_space(inode, first_block, + stop_block - 1); + else + ret = ext4_ind_remove_space(handle, inode, first_block, + stop_block);
- up_write(&EXT4_I(inode)->i_data_sem); + up_write(&EXT4_I(inode)->i_data_sem); + } if (IS_SYNC(inode)) ext4_handle_sync(handle);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Theodore Ts'o tytso@mit.edu
commit 117166efb1ee8f13c38f9e96b258f16d4923f888 upstream.
The inline data feature was implemented before we added support for external inodes for xattrs. It makes no sense to support that combination, but the problem is that there are a number of extended attribute checks that are skipped if e_value_inum is non-zero.
Unfortunately, the inline data code is completely e_value_inum unaware, and attempts to interpret the xattr fields as if it were an inline xattr --- at which point, Hilarity Ensues.
This addresses CVE-2018-11412.
https://bugzilla.kernel.org/show_bug.cgi?id=199803
Reported-by: Jann Horn jannh@google.com Reviewed-by: Andreas Dilger adilger@dilger.ca Signed-off-by: Theodore Ts'o tytso@mit.edu Fixes: e50e5129f384 ("ext4: xattr-in-inode support") Cc: stable@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ext4/inline.c | 6 ++++++ 1 file changed, 6 insertions(+)
--- a/fs/ext4/inline.c +++ b/fs/ext4/inline.c @@ -150,6 +150,12 @@ int ext4_find_inline_data_nolock(struct goto out;
if (!is.s.not_found) { + if (is.s.here->e_value_inum) { + EXT4_ERROR_INODE(inode, "inline data xattr refers " + "to an external xattr inode"); + error = -EFSCORRUPTED; + goto out; + } EXT4_I(inode)->i_inline_off = (u16)((void *)is.s.here - (void *)ext4_raw_inode(&is.iloc)); EXT4_I(inode)->i_inline_size = EXT4_MIN_INLINE_DATA_SIZE +
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Theodore Ts'o tytso@mit.edu
commit eb9b5f01c33adebc31cbc236c02695f605b0e417 upstream.
If ext4_find_inline_data_nolock() returns an error it needs to get reflected up to ext4_iget(). In order to fix this, ext4_iget_extra_inode() needs to return an error (and not return void).
This is related to "ext4: do not allow external inodes for inline data" (which fixes CVE-2018-11412) in that in the errors=continue case, it would be useful for userspace to receive an error indicating that the file system is corrupted.
Signed-off-by: Theodore Ts'o tytso@mit.edu Reviewed-by: Andreas Dilger adilger@dilger.ca Cc: stable@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ext4/inode.c | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-)
--- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -4634,19 +4634,21 @@ static blkcnt_t ext4_inode_blocks(struct } }
-static inline void ext4_iget_extra_inode(struct inode *inode, +static inline int ext4_iget_extra_inode(struct inode *inode, struct ext4_inode *raw_inode, struct ext4_inode_info *ei) { __le32 *magic = (void *)raw_inode + EXT4_GOOD_OLD_INODE_SIZE + ei->i_extra_isize; + if (EXT4_GOOD_OLD_INODE_SIZE + ei->i_extra_isize + sizeof(__le32) <= EXT4_INODE_SIZE(inode->i_sb) && *magic == cpu_to_le32(EXT4_XATTR_MAGIC)) { ext4_set_inode_state(inode, EXT4_STATE_XATTR); - ext4_find_inline_data_nolock(inode); + return ext4_find_inline_data_nolock(inode); } else EXT4_I(inode)->i_inline_off = 0; + return 0; }
int ext4_get_projid(struct inode *inode, kprojid_t *projid) @@ -4826,7 +4828,9 @@ struct inode *ext4_iget(struct super_blo ei->i_extra_isize = sizeof(struct ext4_inode) - EXT4_GOOD_OLD_INODE_SIZE; } else { - ext4_iget_extra_inode(inode, raw_inode, ei); + ret = ext4_iget_extra_inode(inode, raw_inode, ei); + if (ret) + goto bad_inode; } }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Theodore Ts'o tytso@mit.edu
commit 8a2b307c21d4b290e3cbe33f768f194286d07c23 upstream.
Ext4 will always create ext4 extended attributes which do not have a value (where e_value_size is zero) with e_value_offs set to zero. In most places e_value_offs will not be used in a substantive way if e_value_size is zero.
There was one exception to this, which is in ext4_xattr_set_entry(): on a maliciously crafted file system containing an extended attribute with a non-zero e_value_offs and an e_value_size of 0, the attempt to remove this xattr will result in a negative value getting passed to memmove, leading to the following sadness:
[   41.225365] EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null)
[   44.538641] BUG: unable to handle kernel paging request at ffff9ec9a3000000
[   44.538733] IP: __memmove+0x81/0x1a0
[   44.538755] PGD 1249bd067 P4D 1249bd067 PUD 1249c1067 PMD 80000001230000e1
[   44.538793] Oops: 0003 [#1] SMP PTI
[   44.539074] CPU: 0 PID: 1470 Comm: poc Not tainted 4.16.0-rc1+ #1
...
[   44.539475] Call Trace:
[   44.539832]  ext4_xattr_set_entry+0x9e7/0xf80
...
[   44.539972]  ext4_xattr_block_set+0x212/0xea0
...
[   44.540041]  ext4_xattr_set_handle+0x514/0x610
[   44.540065]  ext4_xattr_set+0x7f/0x120
[   44.540090]  __vfs_removexattr+0x4d/0x60
[   44.540112]  vfs_removexattr+0x75/0xe0
[   44.540132]  removexattr+0x4d/0x80
...
[   44.540279]  path_removexattr+0x91/0xb0
[   44.540300]  SyS_removexattr+0xf/0x20
[   44.540322]  do_syscall_64+0x71/0x120
[   44.540344]  entry_SYSCALL_64_after_hwframe+0x21/0x86
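The wild fault address above follows from the length underflow: memmove() takes a size_t, so a small negative length computed from a crafted e_value_offs turns into an enormous unsigned count. A minimal illustration (not the ext4 code, just the conversion):

#include <stdio.h>

int main(void)
{
	int len = -4;	/* hypothetical result of the bogus offset arithmetic */

	/* memmove(dst, src, len) would be asked to copy this many bytes: */
	printf("%zu\n", (size_t)len);	/* 18446744073709551612 on 64-bit */
	return 0;
}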
https://bugzilla.kernel.org/show_bug.cgi?id=199347
This addresses CVE-2018-10840.
Reported-by: "Xu, Wen" wen.xu@gatech.edu Signed-off-by: Theodore Ts'o tytso@mit.edu Reviewed-by: Andreas Dilger adilger@dilger.ca Cc: stable@kernel.org Fixes: dec214d00e0d7 ("ext4: xattr inode deduplication") Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ext4/xattr.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/ext4/xattr.c +++ b/fs/ext4/xattr.c @@ -1687,7 +1687,7 @@ static int ext4_xattr_set_entry(struct e
/* No failures allowed past this point. */
- if (!s->not_found && here->e_value_offs) { + if (!s->not_found && here->e_value_size && here->e_value_offs) { /* Remove the old value. */ void *first_val = s->base + min_offs; size_t offs = le16_to_cpu(here->e_value_offs);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jan Kara jack@suse.cz
commit 4f2f76f751433908364ccff82f437a57d0e6e9b7 upstream.
ext4_resize_fs() has an off-by-one bug when checking whether growing of a filesystem will not overflow inode count. As a result it allows a filesystem with 8192 inodes per group to grow to 64TB which overflows inode count to 0 and makes filesystem unusable. Fix it.
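A worked example of the off-by-one (illustrative, userspace arithmetic only), using the 8192-inodes-per-group case from the report; group numbers are 0-based, so accepting n_group == 0xFFFFFFFF / 8192 means 524288 groups in total:

#include <stdio.h>

int main(void)
{
	unsigned long inodes_per_group = 8192;
	unsigned long n_group = 0xFFFFFFFFUL / inodes_per_group;	/* 524287 */
	unsigned long long total = (n_group + 1ULL) * inodes_per_group;

	printf("largest n_group the old '>' check accepts: %lu\n", n_group);
	printf("resulting inode count: %llu\n", total);		/* 4294967296 == 2^32 */
	printf("stored in 32 bits    : %u\n", (unsigned int)total);	/* 0 */
	return 0;
}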
Cc: stable@vger.kernel.org Fixes: 3f8a6411fbada1fa482276591e037f3b1adcf55b Reported-by: Jaco Kroon jaco@uls.co.za Signed-off-by: Jan Kara jack@suse.cz Signed-off-by: Theodore Ts'o tytso@mit.edu Reviewed-by: Andreas Dilger adilger@dilger.ca Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ext4/resize.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/ext4/resize.c +++ b/fs/ext4/resize.c @@ -1905,7 +1905,7 @@ retry: return 0;
n_group = ext4_get_group_number(sb, n_blocks_count - 1); - if (n_group > (0xFFFFFFFFUL / EXT4_INODES_PER_GROUP(sb))) { + if (n_group >= (0xFFFFFFFFUL / EXT4_INODES_PER_GROUP(sb))) { ext4_warning(sb, "resize would cause inodes_count overflow"); return -EINVAL; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp
commit 84d0c27d6233a9ba0578b20f5a09701eb66cee42 upstream.
syzbot is hitting WARN() at kernfs_add_one() [1]. This is because kernfs_create_link() is confused by a previous device_add() call which continued without setting the dev->kobj.parent field when get_device_parent() failed due to memory allocation fault injection. Fix this by propagating the error from class_dir_create_and_add() to the callers of get_device_parent().
[1] https://syzkaller.appspot.com/bug?id=fae0fb607989ea744526d1c082a5b8de6529116...
Signed-off-by: Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp Reported-by: syzbot syzbot+df47f81c226b31d89fb1@syzkaller.appspotmail.com Cc: Greg Kroah-Hartman gregkh@linuxfoundation.org Cc: stable stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/base/core.c | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-)
--- a/drivers/base/core.c +++ b/drivers/base/core.c @@ -1461,7 +1461,7 @@ class_dir_create_and_add(struct class *c
dir = kzalloc(sizeof(*dir), GFP_KERNEL); if (!dir) - return NULL; + return ERR_PTR(-ENOMEM);
dir->class = class; kobject_init(&dir->kobj, &class_dir_ktype); @@ -1471,7 +1471,7 @@ class_dir_create_and_add(struct class *c retval = kobject_add(&dir->kobj, parent_kobj, "%s", class->name); if (retval < 0) { kobject_put(&dir->kobj); - return NULL; + return ERR_PTR(retval); } return &dir->kobj; } @@ -1778,6 +1778,10 @@ int device_add(struct device *dev)
parent = get_device(dev->parent); kobj = get_device_parent(dev, parent); + if (IS_ERR(kobj)) { + error = PTR_ERR(kobj); + goto parent_error; + } if (kobj) dev->kobj.parent = kobj;
@@ -1876,6 +1880,7 @@ done: kobject_del(&dev->kobj); Error: cleanup_glue_dir(dev, glue_dir); +parent_error: put_device(parent); name_error: kfree(dev->p); @@ -2695,6 +2700,11 @@ int device_move(struct device *dev, stru device_pm_lock(); new_parent = get_device(new_parent); new_parent_kobj = get_device_parent(dev, new_parent); + if (IS_ERR(new_parent_kobj)) { + error = PTR_ERR(new_parent_kobj); + put_device(new_parent); + goto out; + }
pr_debug("device: '%s': %s: moving to '%s'\n", dev_name(dev), __func__, new_parent ? dev_name(new_parent) : "<NULL>");
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Omar Sandoval osandov@fb.com
commit b5c40d598f5408bd0ca22dfffa82f03cd9433f23 upstream.
In btrfs_clone_files(), we must check the NODATASUM flag while the inodes are locked. Otherwise, it's possible that btrfs_ioctl_setflags() will change the flags after we check and we can end up with a partly checksummed file.
The race window is only a few instructions in size, between the check and taking the locks; the check is:
3834	if (S_ISDIR(src->i_mode) || S_ISDIR(inode->i_mode))
3835		return -EISDIR;
where the setflags must run and toggle the NODATASUM flag (provided the file size is 0). The clone will block on the inode lock, setflags takes the inode lock, changes the flags, releases the lock, and the clone continues.
Not impossible but still needs a lot of bad luck to hit unintentionally.
Fixes: 0e7b824c4ef9 ("Btrfs: don't make a file partly checksummed through file clone") CC: stable@vger.kernel.org # 4.4+ Signed-off-by: Omar Sandoval osandov@fb.com Reviewed-by: Nikolay Borisov nborisov@suse.com Reviewed-by: David Sterba dsterba@suse.com [ update changelog ] Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/ioctl.c | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-)
--- a/fs/btrfs/ioctl.c +++ b/fs/btrfs/ioctl.c @@ -3861,11 +3861,6 @@ static noinline int btrfs_clone_files(st src->i_sb != inode->i_sb) return -EXDEV;
- /* don't make the dst file partly checksummed */ - if ((BTRFS_I(src)->flags & BTRFS_INODE_NODATASUM) != - (BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM)) - return -EINVAL; - if (S_ISDIR(src->i_mode) || S_ISDIR(inode->i_mode)) return -EISDIR;
@@ -3875,6 +3870,13 @@ static noinline int btrfs_clone_files(st inode_lock(src); }
+ /* don't make the dst file partly checksummed */ + if ((BTRFS_I(src)->flags & BTRFS_INODE_NODATASUM) != + (BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM)) { + ret = -EINVAL; + goto out_unlock; + } + /* determine range to clone */ ret = -EINVAL; if (off + len > src->i_size || off + len < off)
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Omar Sandoval osandov@fb.com
commit fd4e994bd1f9dc9628e168a7f619bf69f6984635 upstream.
If we have invalid flags set, when we error out we must drop our writer counter and free the buffer we allocated for the arguments. This bug is trivially reproduced with the following program on 4.7+:
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <linux/btrfs.h>
#include <linux/btrfs_tree.h>

int main(int argc, char **argv)
{
	struct btrfs_ioctl_vol_args_v2 vol_args = {
		.flags = UINT64_MAX,
	};
	int ret;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s PATH\n", argv[0]);
		return EXIT_FAILURE;
	}

	fd = open(argv[1], O_WRONLY);
	if (fd == -1) {
		perror("open");
		return EXIT_FAILURE;
	}

	ret = ioctl(fd, BTRFS_IOC_RM_DEV_V2, &vol_args);
	if (ret == -1)
		perror("ioctl");

	close(fd);
	return EXIT_SUCCESS;
}
When unmounting the filesystem, we'll hit the WARN_ON(mnt_get_writers(mnt)) in cleanup_mnt(), and the stale writer count may also prevent the filesystem from being remounted read-only.
Fixes: 6b526ed70cf1 ("btrfs: introduce device delete by devid") CC: stable@vger.kernel.org # 4.9+ Signed-off-by: Omar Sandoval osandov@fb.com Reviewed-by: Su Yue suy.fnst@cn.fujitsu.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/ioctl.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/fs/btrfs/ioctl.c +++ b/fs/btrfs/ioctl.c @@ -2682,8 +2682,10 @@ static long btrfs_ioctl_rm_dev_v2(struct }
/* Check for compatibility reject unknown flags */ - if (vol_args->flags & ~BTRFS_VOL_ARG_V2_FLAGS_SUPPORTED) - return -EOPNOTSUPP; + if (vol_args->flags & ~BTRFS_VOL_ARG_V2_FLAGS_SUPPORTED) { + ret = -EOPNOTSUPP; + goto out; + }
if (test_and_set_bit(BTRFS_FS_EXCL_OP, &fs_info->flags)) { ret = BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Su Yue suy.fnst@cn.fujitsu.com
commit 090a127afa8f73e9618d4058d6755f7ec7453dd6 upstream.
In cow_file_range(), create_io_em() may fail, but its return value is not recorded. The function can then return 0 even though it failed, which is wrong behavior.
Let cow_file_range() return PTR_ERR(em) if create_io_em() failed.
Fixes: 6f9994dbabe5 ("Btrfs: create a helper to create em for IO") CC: stable@vger.kernel.org # 4.11+ Signed-off-by: Su Yue suy.fnst@cn.fujitsu.com Reviewed-by: Nikolay Borisov nborisov@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/inode.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -1027,8 +1027,10 @@ static noinline int cow_file_range(struc ram_size, /* ram_bytes */ BTRFS_COMPRESS_NONE, /* compress_type */ BTRFS_ORDERED_REGULAR /* type */); - if (IS_ERR(em)) + if (IS_ERR(em)) { + ret = PTR_ERR(em); goto out_reserve; + } free_extent_map(em);
ret = btrfs_add_ordered_extent(inode, start, ins.objectid,
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Qu Wenruo wqu@suse.com
commit ac0b4145d662a3b9e34085dea460fb06ede9b69b upstream.
[BUG] Btrfs can create compressed extent without checksum (even though it shouldn't), and if we then try to replace device containing such extent, the result device will contain all the uncompressed data instead of the compressed one.
Test case already submitted to fstests: https://patchwork.kernel.org/patch/10442353/
[CAUSE] When handling a compressed extent without checksum, device replace goes into the copy_nocow_pages() function.
In that function, btrfs gets all inodes referring to this data extent and then uses find_or_create_page() to get pages directly from those inodes.
The problem is that pages obtained directly from the inode are always uncompressed, so for a compressed data extent they mismatch the on-disk data. This leads to a corrupted compressed data extent being written to the replace device.
[FIX] Disable the "optimization" branch and let the unified scrub_pages() path handle it.
Although scrub_pages() doesn't reuse the page cache and is therefore a little slower, it does the correct csum checking and won't cause the data corruption introduced by the "optimization".
Note about the fix: this is the minimal fix that can be backported to older stable trees without conflicts. The whole callchain from copy_nocow_pages() can be deleted, and will be in followup patches.
Fixes: ff023aac3119 ("Btrfs: add code to scrub to copy read data to another disk") CC: stable@vger.kernel.org # 4.4+ Reported-by: James Harvey jamespharvey20@gmail.com Reviewed-by: James Harvey jamespharvey20@gmail.com Signed-off-by: Qu Wenruo wqu@suse.com [ remove code removal, add note why ] Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/scrub.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/btrfs/scrub.c +++ b/fs/btrfs/scrub.c @@ -2775,7 +2775,7 @@ static int scrub_extent(struct scrub_ctx have_csum = scrub_find_csum(sctx, logical, csum); if (have_csum == 0) ++sctx->stat.no_csum; - if (sctx->is_dev_replace && !have_csum) { + if (0 && sctx->is_dev_replace && !have_csum) { ret = copy_nocow_pages(sctx, logical, l, mirror_num, physical_for_dev_replace);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hui Wang hui.wang@canonical.com
commit 986376b68dcc95bb7df60ad30c2353c1f7578fa5 upstream.
We have several Lenovo AIOs such as the M810z, M820z and M920z; they share the same design for the mic-mute hotkey and LED, and they use the same codec with the same pin configuration, so use the pin conf table to apply the fix to all of them.
Fixes: 29693efcea0f ("ALSA: hda - Fix micmute hotkey problem for a lenovo AIO machine") Cc: stable@vger.kernel.org Signed-off-by: Hui Wang hui.wang@canonical.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/pci/hda/patch_realtek.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -6439,7 +6439,6 @@ static const struct snd_pci_quirk alc269 SND_PCI_QUIRK(0x17aa, 0x312f, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), SND_PCI_QUIRK(0x17aa, 0x3138, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), SND_PCI_QUIRK(0x17aa, 0x313c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), - SND_PCI_QUIRK(0x17aa, 0x3112, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI), SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC), SND_PCI_QUIRK(0x17aa, 0x3978, "IdeaPad Y410P", ALC269_FIXUP_NO_SHUTUP), @@ -6610,6 +6609,11 @@ static const struct snd_hda_pin_quirk al {0x12, 0x90a60140}, {0x14, 0x90170110}, {0x21, 0x02211020}), + SND_HDA_PIN_QUIRK(0x10ec0235, 0x17aa, "Lenovo", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY, + {0x12, 0x90a60140}, + {0x14, 0x90170110}, + {0x19, 0x02a11030}, + {0x21, 0x02211020}), SND_HDA_PIN_QUIRK(0x10ec0236, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, {0x12, 0x90a60140}, {0x14, 0x90170150},
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai tiwai@suse.de
commit f16041df4c360eccacfe90f96673b37829e4c959 upstream.
HP Z2 G4 requires the same workaround as other HP machines that have no mic-pin detection.
Cc: stable@vger.kernel.org Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/pci/hda/patch_conexant.c | 1 + 1 file changed, 1 insertion(+)
--- a/sound/pci/hda/patch_conexant.c +++ b/sound/pci/hda/patch_conexant.c @@ -965,6 +965,7 @@ static const struct snd_pci_quirk cxt506 SND_PCI_QUIRK(0x103c, 0x822e, "HP ProBook 440 G4", CXT_FIXUP_MUTE_LED_GPIO), SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x103c, 0x8455, "HP Z2 G4", CXT_FIXUP_HP_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1043, 0x138d, "Asus", CXT_FIXUP_HEADPHONE_MIC_PIN), SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT_FIXUP_OLPC_XO), SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400", CXT_PINCFG_LENOVO_TP410),
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bo Chen chenbo@pdx.edu
commit a3aa60d511746bd6c0d0366d4eb90a7998bcde8b upstream.
When 'kzalloc()' fails in 'snd_hda_attach_pcm_stream()', a new pcm instance is created without setting its operators via 'snd_pcm_set_ops()'. Subsequent operations on the new pcm instance can trigger kernel null pointer dereferences and cause a kernel oops.
This bug was found with my work on building a gray-box fault-injection tool for linux-kernel-module binaries. A kernel null pointer dereference was confirmed from line 'substream->ops->open()' in function 'snd_pcm_open_substream()' in file 'sound/core/pcm_native.c'.
This patch fixes the bug by calling 'snd_device_free()' in the error handling path of 'kzalloc()', which removes the new pcm instance from the snd card before returning with an error code.
Signed-off-by: Bo Chen chenbo@pdx.edu Cc: stable@vger.kernel.org Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/pci/hda/hda_controller.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/sound/pci/hda/hda_controller.c +++ b/sound/pci/hda/hda_controller.c @@ -748,8 +748,10 @@ int snd_hda_attach_pcm_stream(struct hda return err; strlcpy(pcm->name, cpcm->name, sizeof(pcm->name)); apcm = kzalloc(sizeof(*apcm), GFP_KERNEL); - if (apcm == NULL) + if (apcm == NULL) { + snd_device_free(chip->card, pcm); return -ENOMEM; + } apcm->chip = chip; apcm->pcm = pcm; apcm->codec = codec;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dennis Wassenberg dennis.wassenberg@secunet.com
commit 2861751f67b91e1d24e68010ced96614fb3140f4 upstream.
This patch adds the missing initialisation for the HP 2013 UltraSlim Dock Line-In/Out PINs and activates the keyboard mute/micmute LEDs for the HP EliteBook 830 G5.
Signed-off-by: Dennis Wassenberg dennis.wassenberg@secunet.com Cc: stable@vger.kernel.org Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/pci/hda/patch_conexant.c | 1 + 1 file changed, 1 insertion(+)
--- a/sound/pci/hda/patch_conexant.c +++ b/sound/pci/hda/patch_conexant.c @@ -959,6 +959,7 @@ static const struct snd_pci_quirk cxt506 SND_PCI_QUIRK(0x103c, 0x8079, "HP EliteBook 840 G3", CXT_FIXUP_HP_DOCK), SND_PCI_QUIRK(0x103c, 0x807C, "HP EliteBook 820 G3", CXT_FIXUP_HP_DOCK), SND_PCI_QUIRK(0x103c, 0x80FD, "HP ProBook 640 G2", CXT_FIXUP_HP_DOCK), + SND_PCI_QUIRK(0x103c, 0x83b3, "HP EliteBook 830 G5", CXT_FIXUP_HP_DOCK), SND_PCI_QUIRK(0x103c, 0x8174, "HP Spectre x360", CXT_FIXUP_HP_SPECTRE), SND_PCI_QUIRK(0x103c, 0x8115, "HP Z1 Gen3", CXT_FIXUP_HP_GATE_MIC), SND_PCI_QUIRK(0x103c, 0x814f, "HP ZBook 15u G3", CXT_FIXUP_MUTE_LED_GPIO),
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dennis Wassenberg dennis.wassenberg@secunet.com
commit 7eef32c1ef895a3a96463f9cbd04203007cd5555 upstream.
This patch adds the missing initialisation for the HP 2013 UltraSlim Dock Line-In/Out PINs and activates the keyboard mute/micmute LEDs for the HP ProBook 640 G4.
Signed-off-by: Dennis Wassenberg dennis.wassenberg@secunet.com Cc: stable@vger.kernel.org Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/pci/hda/patch_conexant.c | 1 + 1 file changed, 1 insertion(+)
--- a/sound/pci/hda/patch_conexant.c +++ b/sound/pci/hda/patch_conexant.c @@ -960,6 +960,7 @@ static const struct snd_pci_quirk cxt506 SND_PCI_QUIRK(0x103c, 0x807C, "HP EliteBook 820 G3", CXT_FIXUP_HP_DOCK), SND_PCI_QUIRK(0x103c, 0x80FD, "HP ProBook 640 G2", CXT_FIXUP_HP_DOCK), SND_PCI_QUIRK(0x103c, 0x83b3, "HP EliteBook 830 G5", CXT_FIXUP_HP_DOCK), + SND_PCI_QUIRK(0x103c, 0x83d3, "HP ProBook 640 G4", CXT_FIXUP_HP_DOCK), SND_PCI_QUIRK(0x103c, 0x8174, "HP Spectre x360", CXT_FIXUP_HP_SPECTRE), SND_PCI_QUIRK(0x103c, 0x8115, "HP Z1 Gen3", CXT_FIXUP_HP_GATE_MIC), SND_PCI_QUIRK(0x103c, 0x814f, "HP ZBook 15u G3", CXT_FIXUP_MUTE_LED_GPIO),
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tony Luck tony.luck@intel.com
commit 985c78d3ff8e9c74450fa2bb08eb55e680d999ca upstream.
Each of the strings that we want to put into the buf[MAX_FLAG_OPT_SIZE] in flags_read() is two characters long. But the sprintf() adds a trailing newline and will add a terminating NUL byte. So MAX_FLAG_OPT_SIZE needs to be 4.
sprintf() calls vsnprintf() and *that* does return:
" * The return value is the number of characters which would * be generated for the given input, excluding the trailing * '\0', as per ISO C99."
Note the "excluding".
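A quick userspace illustration of the sizing argument; the "hw" flag string is just an example value:

#include <stdio.h>

int main(void)
{
	char buf[4];				/* 2 chars + '\n' + NUL */
	int n = sprintf(buf, "%s\n", "hw");	/* returns 3, stores 4 bytes */

	printf("sprintf returned %d, buffer needs %d bytes\n", n, n + 1);
	return 0;
}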
Reported-by: Dmitry Vyukov dvyukov@google.com Signed-off-by: Tony Luck tony.luck@intel.com Signed-off-by: Borislav Petkov bp@suse.de Signed-off-by: Thomas Gleixner tglx@linutronix.de Cc: stable@vger.kernel.org Cc: linux-edac linux-edac@vger.kernel.org Link: http://lkml.kernel.org/r/20180427163707.ktaiysvbk3yhk4wm@agluck-desk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/kernel/cpu/mcheck/mce-inject.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/kernel/cpu/mcheck/mce-inject.c +++ b/arch/x86/kernel/cpu/mcheck/mce-inject.c @@ -48,7 +48,7 @@ static struct dentry *dfs_inj;
static u8 n_banks;
-#define MAX_FLAG_OPT_SIZE 3 +#define MAX_FLAG_OPT_SIZE 4 #define NBCFG 0x44
enum injection_type {
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Steve French stfrench@microsoft.com
commit cfe89091644c441a1ade6dae6d2e47b715648615 upstream.
Fix a few cases where we were not freeing the xid which led to active requests being non-zero at unmount time.
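The shape of the fix is the usual single-exit cleanup pattern. A hedged userspace sketch, with get_xid()/free_xid() modeled by malloc()/free() and error numbers picked for illustration:

#include <stdio.h>
#include <stdlib.h>

static int do_op(int fail_early)
{
	int rc = 0;
	void *xid = malloc(16);		/* stands in for get_xid() */

	if (!xid)
		return -12;		/* -ENOMEM */

	if (fail_early) {
		rc = -95;		/* -EOPNOTSUPP */
		goto out;		/* every early exit releases the xid */
	}
	/* ... the real request would be sent here ... */
out:
	free(xid);			/* stands in for free_xid() */
	return rc;
}

int main(void)
{
	printf("do_op(1) = %d\n", do_op(1));
	printf("do_op(0) = %d\n", do_op(0));
	return 0;
}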
Signed-off-by: Steve French smfrench@gmail.com CC: Stable stable@vger.kernel.org Reviewed-by: Ronnie Sahlberg lsahlber@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/cifs/smb2ops.c | 63 +++++++++++++++++++++++++++++++++++++----------------- 1 file changed, 44 insertions(+), 19 deletions(-)
--- a/fs/cifs/smb2ops.c +++ b/fs/cifs/smb2ops.c @@ -1571,8 +1571,11 @@ get_smb2_acl_by_path(struct cifs_sb_info oparms.create_options = 0;
utf16_path = cifs_convert_path_to_utf16(path, cifs_sb); - if (!utf16_path) - return ERR_PTR(-ENOMEM); + if (!utf16_path) { + rc = -ENOMEM; + free_xid(xid); + return ERR_PTR(rc); + }
oparms.tcon = tcon; oparms.desired_access = READ_CONTROL; @@ -1630,8 +1633,11 @@ set_smb2_acl(struct cifs_ntsd *pnntsd, _ access_flags = WRITE_DAC;
utf16_path = cifs_convert_path_to_utf16(path, cifs_sb); - if (!utf16_path) - return -ENOMEM; + if (!utf16_path) { + rc = -ENOMEM; + free_xid(xid); + return rc; + }
oparms.tcon = tcon; oparms.desired_access = access_flags; @@ -1691,15 +1697,21 @@ static long smb3_zero_range(struct file
/* if file not oplocked can't be sure whether asking to extend size */ if (!CIFS_CACHE_READ(cifsi)) - if (keep_size == false) - return -EOPNOTSUPP; + if (keep_size == false) { + rc = -EOPNOTSUPP; + free_xid(xid); + return rc; + }
/* * Must check if file sparse since fallocate -z (zero range) assumes * non-sparse allocation */ - if (!(cifsi->cifsAttrs & FILE_ATTRIBUTE_SPARSE_FILE)) - return -EOPNOTSUPP; + if (!(cifsi->cifsAttrs & FILE_ATTRIBUTE_SPARSE_FILE)) { + rc = -EOPNOTSUPP; + free_xid(xid); + return rc; + }
/* * need to make sure we are not asked to extend the file since the SMB3 @@ -1708,8 +1720,11 @@ static long smb3_zero_range(struct file * which for a non sparse file would zero the newly extended range */ if (keep_size == false) - if (i_size_read(inode) < offset + len) - return -EOPNOTSUPP; + if (i_size_read(inode) < offset + len) { + rc = -EOPNOTSUPP; + free_xid(xid); + return rc; + }
cifs_dbg(FYI, "offset %lld len %lld", offset, len);
@@ -1743,8 +1758,11 @@ static long smb3_punch_hole(struct file
/* Need to make file sparse, if not already, before freeing range. */ /* Consider adding equivalent for compressed since it could also work */ - if (!smb2_set_sparse(xid, tcon, cfile, inode, set_sparse)) - return -EOPNOTSUPP; + if (!smb2_set_sparse(xid, tcon, cfile, inode, set_sparse)) { + rc = -EOPNOTSUPP; + free_xid(xid); + return rc; + }
cifs_dbg(FYI, "offset %lld len %lld", offset, len);
@@ -1776,8 +1794,10 @@ static long smb3_simple_falloc(struct fi
/* if file not oplocked can't be sure whether asking to extend size */ if (!CIFS_CACHE_READ(cifsi)) - if (keep_size == false) - return -EOPNOTSUPP; + if (keep_size == false) { + free_xid(xid); + return rc; + }
/* * Files are non-sparse by default so falloc may be a no-op @@ -1786,14 +1806,16 @@ static long smb3_simple_falloc(struct fi */ if ((cifsi->cifsAttrs & FILE_ATTRIBUTE_SPARSE_FILE) == 0) { if (keep_size == true) - return 0; + rc = 0; /* check if extending file */ else if (i_size_read(inode) >= off + len) /* not extending file and already not sparse */ - return 0; + rc = 0; /* BB: in future add else clause to extend file */ else - return -EOPNOTSUPP; + rc = -EOPNOTSUPP; + free_xid(xid); + return rc; }
if ((keep_size == true) || (i_size_read(inode) >= off + len)) { @@ -1805,8 +1827,11 @@ static long smb3_simple_falloc(struct fi * ie potentially making a few extra pages at the beginning * or end of the file non-sparse via set_sparse is harmless. */ - if ((off > 8192) || (off + len + 8192 < i_size_read(inode))) - return -EOPNOTSUPP; + if ((off > 8192) || (off + len + 8192 < i_size_read(inode))) { + rc = -EOPNOTSUPP; + free_xid(xid); + return rc; + }
rc = smb2_set_sparse(xid, tcon, cfile, inode, false); }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Steve French stfrench@microsoft.com
commit b2adf22fdfba85a6701c481faccdbbb3a418ccfc upstream.
The server detects a reconnect by the (non-zero) value in the PreviousSessionId field of the SMB2/SMB3 SessionSetup request, but this behavior regressed due to commit 166cea4dc3a4f66f020cfb9286225ecd228ab61d ("SMB2: Separate RawNTLMSSP authentication from SMB2_sess_setup").
CC: Stable stable@vger.kernel.org CC: Sachin Prabhu sprabhu@redhat.com Signed-off-by: Steve French smfrench@gmail.com Reviewed-by: Ronnie Sahlberg lsahlber@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/cifs/smb2pdu.c | 1 + 1 file changed, 1 insertion(+)
--- a/fs/cifs/smb2pdu.c +++ b/fs/cifs/smb2pdu.c @@ -1182,6 +1182,7 @@ SMB2_sess_setup(const unsigned int xid, sess_data->ses = ses; sess_data->buf0_type = CIFS_NO_BUFFER; sess_data->nls_cp = (struct nls_table *) nls_cp; + sess_data->previous_session = ses->Suid;
while (sess_data->func) sess_data->func(sess_data);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mark Syms mark.syms@citrix.com
commit d81243c697ffc71f983736e7da2db31a8be0001f upstream.
Handle this additional status in the same way as SESSION_EXPIRED.
Signed-off-by: Mark Syms mark.syms@citrix.com Signed-off-by: Steve French stfrench@microsoft.com CC: Stable stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/cifs/smb2ops.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
--- a/fs/cifs/smb2ops.c +++ b/fs/cifs/smb2ops.c @@ -1256,10 +1256,11 @@ smb2_is_session_expired(char *buf) { struct smb2_sync_hdr *shdr = get_sync_hdr(buf);
- if (shdr->Status != STATUS_NETWORK_SESSION_EXPIRED) + if (shdr->Status != STATUS_NETWORK_SESSION_EXPIRED && + shdr->Status != STATUS_USER_SESSION_DELETED) return false;
- cifs_dbg(FYI, "Session expired\n"); + cifs_dbg(FYI, "Session expired or deleted\n"); return true; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Shirish Pargaonkar shirishpargaonkar@gmail.com
commit ee25c6dd7b05113783ce1f4fab6b30fc00d29b8d upstream.
The validate_buf() function checks for an expected minimum sized response passed to the query_info() function. For security information, the size of a security descriptor can be smaller (one subauthority, no ACEs) than the size of the structure that defines the FileAllInformation FileInfoClass.
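For reference, the arithmetic behind the new minimum (byte counts as listed in the patch below; struct cifs_ntsd itself is not reproduced here):

#include <stdio.h>

int main(void)
{
	/* revision + sub-auth count + authority + one sub-authority */
	int min_sid_len = 1 + 1 + 6 + 4;

	printf("MIN_SID_LEN = %d bytes\n", min_sid_len);
	printf("owner + group SIDs = %d bytes on top of the descriptor header\n",
	       2 * min_sid_len);
	return 0;
}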
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=199725 Cc: stable@vger.kernel.org Signed-off-by: Shirish Pargaonkar shirishpargaonkar@gmail.com Reviewed-by: Noah Morrison noah.morrison@rubrik.com Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/cifs/cifsacl.h | 14 ++++++++++++++ fs/cifs/smb2pdu.c | 3 +-- 2 files changed, 15 insertions(+), 2 deletions(-)
--- a/fs/cifs/cifsacl.h +++ b/fs/cifs/cifsacl.h @@ -98,4 +98,18 @@ struct cifs_ace { struct cifs_sid sid; /* ie UUID of user or group who gets these perms */ } __attribute__((packed));
+/* + * Minimum security identifier can be one for system defined Users + * and Groups such as NULL SID and World or Built-in accounts such + * as Administrator and Guest and consists of + * Revision + Num (Sub)Auths + Authority + Domain (one Subauthority) + */ +#define MIN_SID_LEN (1 + 1 + 6 + 4) /* in bytes */ + +/* + * Minimum security descriptor can be one without any SACL and DACL and can + * consist of revision, type, and two sids of minimum size for owner and group + */ +#define MIN_SEC_DESC_LEN (sizeof(struct cifs_ntsd) + (2 * MIN_SID_LEN)) + #endif /* _CIFSACL_H */ --- a/fs/cifs/smb2pdu.c +++ b/fs/cifs/smb2pdu.c @@ -2279,8 +2279,7 @@ SMB2_query_acl(const unsigned int xid, s
return query_info(xid, tcon, persistent_fid, volatile_fid, 0, SMB2_O_INFO_SECURITY, additional_info, - SMB2_MAX_BUFFER_SIZE, - sizeof(struct smb2_file_all_info), data, plen); + SMB2_MAX_BUFFER_SIZE, MIN_SEC_DESC_LEN, data, plen); }
int
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Josef Bacik jbacik@fb.com
commit 8364da4751cf22201d74933d5e634176f44ed407 upstream.
This fixes a use-after-free bug: we shouldn't be dereferencing disk->queue right after del_gendisk(disk). Save the queue pointer and do the cleanup after del_gendisk().
Fixes: c6a4759ea0c9 ("nbd: add device refcounting") cc: stable@vger.kernel.org Signed-off-by: Josef Bacik jbacik@fb.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/block/nbd.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
--- a/drivers/block/nbd.c +++ b/drivers/block/nbd.c @@ -173,9 +173,12 @@ static const struct device_attribute pid static void nbd_dev_remove(struct nbd_device *nbd) { struct gendisk *disk = nbd->disk; + struct request_queue *q; + if (disk) { + q = disk->queue; del_gendisk(disk); - blk_cleanup_queue(disk->queue); + blk_cleanup_queue(q); blk_mq_free_tag_set(&nbd->tag_set); disk->private_data = NULL; put_disk(disk);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Josef Bacik jbacik@fb.com
commit c3f7c9397609705ef848cc98a5fb429b3e90c3c4 upstream.
I messed up changing the size of an NBD device while it was connected by not actually updating the device or doing the uevent. Fix this by updating everything if we're connected and we change the size.
cc: stable@vger.kernel.org Fixes: 639812a ("nbd: don't set the device size until we're connected") Signed-off-by: Josef Bacik jbacik@fb.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/block/nbd.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/drivers/block/nbd.c +++ b/drivers/block/nbd.c @@ -246,6 +246,8 @@ static void nbd_size_set(struct nbd_devi struct nbd_config *config = nbd->config; config->blksize = blocksize; config->bytesize = blocksize * nr_blocks; + if (nbd->task_recv != NULL) + nbd_size_update(nbd); }
static void nbd_complete_rq(struct request *req)
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Josef Bacik jbacik@fb.com
commit 9e2b19675d1338d2a38e99194756f2db44a081df upstream.
When we stopped relying on the bdev everywhere, I broke updating the block device size on the fly, which ceph relies on. We can't just call set_capacity(); we also have to call bd_set_size() so that things like parted will notice the device size change.
Fixes: 29eaadc ("nbd: stop using the bdev everywhere") cc: stable@vger.kernel.org Signed-off-by: Josef Bacik jbacik@fb.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/block/nbd.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-)
--- a/drivers/block/nbd.c +++ b/drivers/block/nbd.c @@ -234,9 +234,18 @@ static void nbd_size_clear(struct nbd_de static void nbd_size_update(struct nbd_device *nbd) { struct nbd_config *config = nbd->config; + struct block_device *bdev = bdget_disk(nbd->disk, 0); + blk_queue_logical_block_size(nbd->disk->queue, config->blksize); blk_queue_physical_block_size(nbd->disk->queue, config->blksize); set_capacity(nbd->disk, config->bytesize >> 9); + if (bdev) { + if (bdev->bd_disk) + bd_set_size(bdev, config->bytesize); + else + bdev->bd_invalidated = 1; + bdput(bdev); + } kobject_uevent(&nbd_to_dev(nbd)->kobj, KOBJ_CHANGE); }
@@ -1114,7 +1123,6 @@ static int nbd_start_device_ioctl(struct if (ret) return ret;
- bd_set_size(bdev, config->bytesize); if (max_part) bdev->bd_invalidated = 1; mutex_unlock(&nbd->config_lock);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Roman Pen roman.penyaev@profitbricks.com
commit a347c7ad8edf4c5685154f3fdc3c12fc1db800ba upstream.
It is not allowed to reinitialize the q->tag_set_list list entry while an RCU grace period has not completed yet; otherwise the following soft lockup in blk_mq_sched_restart() happens:
[ 1064.252652] watchdog: BUG: soft lockup - CPU#12 stuck for 23s! [fio:9270]
[ 1064.254445] task: ffff99b912e8b900 task.stack: ffffa6d54c758000
[ 1064.254613] RIP: 0010:blk_mq_sched_restart+0x96/0x150
[ 1064.256510] Call Trace:
[ 1064.256664]  <IRQ>
[ 1064.256824]  blk_mq_free_request+0xea/0x100
[ 1064.256987]  msg_io_conf+0x59/0xd0 [ibnbd_client]
[ 1064.257175]  complete_rdma_req+0xf2/0x230 [ibtrs_client]
[ 1064.257340]  ? ibtrs_post_recv_empty+0x4d/0x70 [ibtrs_core]
[ 1064.257502]  ibtrs_clt_rdma_done+0xd1/0x1e0 [ibtrs_client]
[ 1064.257669]  ib_create_qp+0x321/0x380 [ib_core]
[ 1064.257841]  ib_process_cq_direct+0xbd/0x120 [ib_core]
[ 1064.258007]  irq_poll_softirq+0xb7/0xe0
[ 1064.258165]  __do_softirq+0x106/0x2a2
[ 1064.258328]  irq_exit+0x92/0xa0
[ 1064.258509]  do_IRQ+0x4a/0xd0
[ 1064.258660]  common_interrupt+0x7a/0x7a
[ 1064.258818]  </IRQ>
Meanwhile another context frees other queue but with the same set of shared tags:
[ 1288.201183] INFO: task bash:5910 blocked for more than 180 seconds.
[ 1288.201833] bash D 0 5910 5820 0x00000000
[ 1288.202016] Call Trace:
[ 1288.202315]  schedule+0x32/0x80
[ 1288.202462]  schedule_timeout+0x1e5/0x380
[ 1288.203838]  wait_for_completion+0xb0/0x120
[ 1288.204137]  __wait_rcu_gp+0x125/0x160
[ 1288.204287]  synchronize_sched+0x6e/0x80
[ 1288.204770]  blk_mq_free_queue+0x74/0xe0
[ 1288.204922]  blk_cleanup_queue+0xc7/0x110
[ 1288.205073]  ibnbd_clt_unmap_device+0x1bc/0x280 [ibnbd_client]
[ 1288.205389]  ibnbd_clt_unmap_dev_store+0x169/0x1f0 [ibnbd_client]
[ 1288.205548]  kernfs_fop_write+0x109/0x180
[ 1288.206328]  vfs_write+0xb3/0x1a0
[ 1288.206476]  SyS_write+0x52/0xc0
[ 1288.206624]  do_syscall_64+0x68/0x1d0
[ 1288.206774]  entry_SYSCALL_64_after_hwframe+0x3d/0xa2
What happened is the following:
1. There are several MQ queues with shared tags.
2. One queue is about to be freed and the task is now in blk_mq_del_queue_tag_set().
3. Another CPU is in blk_mq_sched_restart() and loops over all queues in the tag list in order to find an hctx to restart.
Because the linked list entry was modified in blk_mq_del_queue_tag_set() without properly waiting for a grace period, blk_mq_sched_restart() never ends, spinning in list_for_each_entry_rcu_rr(), hence the soft lockup.
The fix is simple: reinitialize the list entry only after an RCU grace period has elapsed.
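A stub userspace model of that ordering follows; the RCU and list helpers do nothing real and exist only so the before/after ordering reads as compilable C:

#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

static void list_del_rcu(struct list_head *e)
{
	e->next->prev = e->prev;
	e->prev->next = e->next;
	/* e->next is left intact so concurrent readers can keep walking */
}

static void INIT_LIST_HEAD(struct list_head *e)
{
	e->next = e;
	e->prev = e;
}

static void synchronize_rcu(void)
{
	/* stub: the real call blocks until all current readers are done */
}

static struct list_head tag_list = { &tag_list, &tag_list };

int main(void)
{
	struct list_head q = { &tag_list, &tag_list };

	tag_list.next = &q;
	tag_list.prev = &q;

	list_del_rcu(&q);	/* readers may still be looking at q */
	synchronize_rcu();	/* wait for them to drop out */
	INIT_LIST_HEAD(&q);	/* only now is it safe to reuse the entry */

	printf("entry reused after the grace period\n");
	return 0;
}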
Fixes: 705cda97ee3a ("blk-mq: Make it safe to use RCU to iterate over blk_mq_tag_set.tag_list") Cc: stable@vger.kernel.org Cc: Sagi Grimberg sagi@grimberg.me Cc: linux-block@vger.kernel.org Reviewed-by: Christoph Hellwig hch@lst.de Reviewed-by: Ming Lei ming.lei@redhat.com Reviewed-by: Bart Van Assche bart.vanassche@wdc.com Signed-off-by: Roman Pen roman.penyaev@profitbricks.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- block/blk-mq.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
--- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -2252,7 +2252,6 @@ static void blk_mq_del_queue_tag_set(str
mutex_lock(&set->tag_list_lock); list_del_rcu(&q->tag_set_list); - INIT_LIST_HEAD(&q->tag_set_list); if (list_is_singular(&set->tag_list)) { /* just transitioned to unshared */ set->flags &= ~BLK_MQ_F_TAG_SHARED; @@ -2260,8 +2259,8 @@ static void blk_mq_del_queue_tag_set(str blk_mq_update_tag_set_depth(set, false); } mutex_unlock(&set->tag_list_lock); - synchronize_rcu(); + INIT_LIST_HEAD(&q->tag_set_list); }
static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set,
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tejun Heo tj@kernel.org
commit f183464684190bacbfb14623bd3e4e51b7575b4c upstream.
cgwb_release() punts the actual release to cgwb_release_workfn() on system_wq. Depending on the number of cgroups or block devices, there can be a lot of cgwb_release_workfn() in flight at the same time.
We're periodically seeing close to 256 kworkers getting stuck with the following stack trace, and over time the entire system gets stuck.
[<ffffffff810ee40c>] _synchronize_rcu_expedited.constprop.72+0x2fc/0x330
[<ffffffff810ee634>] synchronize_rcu_expedited+0x24/0x30
[<ffffffff811ccf23>] bdi_unregister+0x53/0x290
[<ffffffff811cd1e9>] release_bdi+0x89/0xc0
[<ffffffff811cd645>] wb_exit+0x85/0xa0
[<ffffffff811cdc84>] cgwb_release_workfn+0x54/0xb0
[<ffffffff810a68d0>] process_one_work+0x150/0x410
[<ffffffff810a71fd>] worker_thread+0x6d/0x520
[<ffffffff810ad3dc>] kthread+0x12c/0x160
[<ffffffff81969019>] ret_from_fork+0x29/0x40
[<ffffffffffffffff>] 0xffffffffffffffff
The events leading to the lockup are...
1. A lot of cgwb_release_workfn() is queued at the same time and all system_wq kworkers are assigned to execute them.
2. They all end up calling synchronize_rcu_expedited(). One of them wins and tries to perform the expedited synchronization.
3. However, that involves queueing rcu_exp_work to system_wq and waiting for it. Because #1 is holding all available kworkers on system_wq, rcu_exp_work can't be executed. cgwb_release_workfn() is waiting for synchronize_rcu_expedited(), which in turn is waiting for cgwb_release_workfn() to free up some of the kworkers.
We shouldn't be scheduling hundreds of cgwb_release_workfn() at the same time; there's nothing to be gained from that. This patch updates the cgwb release path to use a dedicated percpu workqueue with @max_active of 1.
While this resolves the problem at hand, it might be a good idea to isolate rcu_exp_work to its own workqueue too as it can be used from various paths and is prone to this sort of indirect A-A deadlocks.
Signed-off-by: Tejun Heo tj@kernel.org Cc: "Paul E. McKenney" paulmck@linux.vnet.ibm.com Cc: stable@vger.kernel.org Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- mm/backing-dev.c | 18 +++++++++++++++++- 1 file changed, 17 insertions(+), 1 deletion(-)
--- a/mm/backing-dev.c +++ b/mm/backing-dev.c @@ -409,6 +409,7 @@ static void wb_exit(struct bdi_writeback * protected. */ static DEFINE_SPINLOCK(cgwb_lock); +static struct workqueue_struct *cgwb_release_wq;
/** * wb_congested_get_create - get or create a wb_congested @@ -519,7 +520,7 @@ static void cgwb_release(struct percpu_r { struct bdi_writeback *wb = container_of(refcnt, struct bdi_writeback, refcnt); - schedule_work(&wb->release_work); + queue_work(cgwb_release_wq, &wb->release_work); }
static void cgwb_kill(struct bdi_writeback *wb) @@ -783,6 +784,21 @@ static void cgwb_bdi_register(struct bac spin_unlock_irq(&cgwb_lock); }
+static int __init cgwb_init(void) +{ + /* + * There can be many concurrent release work items overwhelming + * system_wq. Put them in a separate wq and limit concurrency. + * There's no point in executing many of these in parallel. + */ + cgwb_release_wq = alloc_workqueue("cgwb_release", 0, 1); + if (!cgwb_release_wq) + return -ENOMEM; + + return 0; +} +subsys_initcall(cgwb_init); + #else /* CONFIG_CGROUP_WRITEBACK */
static int cgwb_bdi_init(struct backing_dev_info *bdi)
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tao Wang kevin.wangtao@hisilicon.com
commit c7d1f119c48f64bebf0fa1e326af577c6152fe30 upstream.
If the policy limits are updated via cpufreq_update_policy() and subsequently via sysfs, the limits stored in user_policy may be set incorrectly.
For example, if both min and max are set via sysfs to the maximum available frequency, user_policy.min and user_policy.max will also be the maximum. If a policy notifier triggered by cpufreq_update_policy() lowers both the min and the max at this point, that change is not reflected by the user_policy limits, so if the max is updated again via sysfs to the same lower value, then user_policy.max will be lower than user_policy.min which shouldn't happen. In particular, if one of the policy CPUs is then taken offline and back online, cpufreq_set_policy() will fail for it due to a failing limits check.
To prevent that from happening, initialize the min and max fields of the new_policy object to the ones stored in user_policy that were previously set via sysfs.
Signed-off-by: Kevin Wangtao kevin.wangtao@hisilicon.com Acked-by: Viresh Kumar viresh.kumar@linaro.org [ rjw: Subject & changelog ] Cc: All applicable stable@vger.kernel.org Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/cpufreq/cpufreq.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -693,6 +693,8 @@ static ssize_t store_##file_name \ struct cpufreq_policy new_policy; \ \ memcpy(&new_policy, policy, sizeof(*policy)); \ + new_policy.min = policy->user_policy.min; \ + new_policy.max = policy->user_policy.max; \ \ ret = sscanf(buf, "%u", &new_policy.object); \ if (ret != 1) \
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Chen Yu yu.c.chen@intel.com
commit 7592019634f8473f0b0973ce79297183077bdbc2 upstream.
According to the current code implementation, detecting a long idle period is done by checking whether the interval between two adjacent utilization update handlers is long enough. Although this mechanism can detect if the idle period is long enough (no utilization hooks invoked during the idle period), it might not cover a corner case: if a task has occupied the CPU for so long that no context switches happen during that period, then no utilization handler will be launched until this high-priority task is scheduled out. As a result, the idle_periods field might be calculated incorrectly, because it regards the 100% load as 0% and confuses the conservative governor, which uses this field.
Change the detection to compare the idle_time with sampling_rate directly.
Reported-by: Artem S. Tashkinov t.artem@mailcity.com Signed-off-by: Chen Yu yu.c.chen@intel.com Acked-by: Viresh Kumar viresh.kumar@linaro.org Cc: All applicable stable@vger.kernel.org Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/cpufreq/cpufreq_governor.c | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-)
--- a/drivers/cpufreq/cpufreq_governor.c +++ b/drivers/cpufreq/cpufreq_governor.c @@ -165,7 +165,7 @@ unsigned int dbs_update(struct cpufreq_p * calls, so the previous load value can be used then. */ load = j_cdbs->prev_load; - } else if (unlikely(time_elapsed > 2 * sampling_rate && + } else if (unlikely((int)idle_time > 2 * sampling_rate && j_cdbs->prev_load)) { /* * If the CPU had gone completely idle and a task has @@ -185,10 +185,8 @@ unsigned int dbs_update(struct cpufreq_p * clear prev_load to guarantee that the load will be * computed again next time. * - * Detecting this situation is easy: the governor's - * utilization update handler would not have run during - * CPU-idle periods. Hence, an unusually large - * 'time_elapsed' (as compared to the sampling rate) + * Detecting this situation is easy: an unusually large + * 'idle_time' (as compared to the sampling rate) * indicates this scenario. */ load = j_cdbs->prev_load; @@ -217,8 +215,8 @@ unsigned int dbs_update(struct cpufreq_p j_cdbs->prev_load = load; }
- if (time_elapsed > 2 * sampling_rate) { - unsigned int periods = time_elapsed / sampling_rate; + if (unlikely((int)idle_time > 2 * sampling_rate)) { + unsigned int periods = idle_time / sampling_rate;
if (periods < idle_periods) idle_periods = periods;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Carpenter dan.carpenter@oracle.com
commit 18c9a99bce2a57dfd7e881658703b5d7469cc7b9 upstream.
We read from the cdb[] buffer in ata_exec_internal_sg(). It has to be ATAPI_CDB_LEN (16) bytes long, but this buffer is only 12 bytes.
Fixes: 213342053db5 ("libata: handle power transition of ODD") Signed-off-by: Dan Carpenter dan.carpenter@oracle.com Signed-off-by: Tejun Heo tj@kernel.org Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/ata/libata-zpodd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/ata/libata-zpodd.c +++ b/drivers/ata/libata-zpodd.c @@ -35,7 +35,7 @@ struct zpodd { static int eject_tray(struct ata_device *dev) { struct ata_taskfile tf; - static const char cdb[] = { GPCMD_START_STOP_UNIT, + static const char cdb[ATAPI_CDB_LEN] = { GPCMD_START_STOP_UNIT, 0, 0, 0, 0x02, /* LoEj */ 0, 0, 0, 0, 0, 0, 0,
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans de Goede hdegoede@redhat.com
commit 2cfce3a86b64b53f0a70e92a6a659c720c319b45 upstream.
Commit 184add2ca23c ("libata: Apply NOLPM quirk for SanDisk SD7UB3Q*G1001 SSDs") disabled LPM for SanDisk SD7UB3Q*G1001 SSDs.
This has led to several reports from users of that SSD for whom LPM was working fine and who now see significantly increased idle power consumption on their laptops.
Likely there is another problem on the original reporter's T450s which gets exposed by the uncore reaching deeper sleep states (higher PC-states) once LPM is enabled. The problem as reported, a hard freeze about once a day, already did not sound like it would be caused by LPM, and the reports of the SSD working fine confirm this. The original reporter is ok with dropping the quirk.
An X250 user reported the same hard-freeze problem, and for him it went away after unrelated updates; I suspect some GPU driver stack changes fixed things.
TL;DR: The original reporter's problem was triggered by LPM but was not an LPM issue, so drop the quirk for the SSD in question.
BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=1583207 Cc: stable@vger.kernel.org Cc: Richard W.M. Jones rjones@redhat.com Cc: Lorenzo Dalrio lorenzo.dalrio@gmail.com Reported-by: Lorenzo Dalrio lorenzo.dalrio@gmail.com Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Tejun Heo tj@kernel.org Acked-by: "Richard W.M. Jones" rjones@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/ata/libata-core.c | 3 --- 1 file changed, 3 deletions(-)
--- a/drivers/ata/libata-core.c +++ b/drivers/ata/libata-core.c @@ -4543,9 +4543,6 @@ static const struct ata_blacklist_entry ATA_HORKAGE_ZERO_AFTER_TRIM | ATA_HORKAGE_NOLPM, },
- /* Sandisk devices which are known to not handle LPM well */ - { "SanDisk SD7UB3Q*G1001", NULL, ATA_HORKAGE_NOLPM, }, - /* devices that don't properly handle queued TRIM commands */ { "Micron_M500IT_*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM | ATA_HORKAGE_ZERO_AFTER_TRIM, },
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Stefan Potyra Stefan.Potyra@elektrobit.com
commit 955bc61328dc0a297fb3baccd84e9d3aee501ed8 upstream.
According to the API, you may only call clk_get_rate() after actually enabling the clock.
Found by Linux Driver Verification project (linuxtesting.org).
Fixes: a5fd9139f74c ("w1: add 1-wire master driver for i.MX27 / i.MX31") Signed-off-by: Stefan Potyra Stefan.Potyra@elektrobit.com Acked-by: Evgeniy Polyakov zbr@ioremap.net Cc: stable stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/w1/masters/mxc_w1.c | 20 +++++++++++++------- 1 file changed, 13 insertions(+), 7 deletions(-)
--- a/drivers/w1/masters/mxc_w1.c +++ b/drivers/w1/masters/mxc_w1.c @@ -112,6 +112,10 @@ static int mxc_w1_probe(struct platform_ if (IS_ERR(mdev->clk)) return PTR_ERR(mdev->clk);
+ err = clk_prepare_enable(mdev->clk); + if (err) + return err; + clkrate = clk_get_rate(mdev->clk); if (clkrate < 10000000) dev_warn(&pdev->dev, @@ -125,12 +129,10 @@ static int mxc_w1_probe(struct platform_
res = platform_get_resource(pdev, IORESOURCE_MEM, 0); mdev->regs = devm_ioremap_resource(&pdev->dev, res); - if (IS_ERR(mdev->regs)) - return PTR_ERR(mdev->regs); - - err = clk_prepare_enable(mdev->clk); - if (err) - return err; + if (IS_ERR(mdev->regs)) { + err = PTR_ERR(mdev->regs); + goto out_disable_clk; + }
/* Software reset 1-Wire module */ writeb(MXC_W1_RESET_RST, mdev->regs + MXC_W1_RESET); @@ -146,8 +148,12 @@ static int mxc_w1_probe(struct platform_
err = w1_add_master_device(&mdev->bus_master); if (err) - clk_disable_unprepare(mdev->clk); + goto out_disable_clk; + + return 0;
+out_disable_clk: + clk_disable_unprepare(mdev->clk); return err; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tony Luck tony.luck@intel.com
commit 1d9f3e20a56d33e55748552aeec597f58542f92d upstream.
A new stepping of Skylake has fixes for cache occupancy and memory bandwidth monitoring.
Update the code to enable these by default on newer steppings.
Signed-off-by: Tony Luck tony.luck@intel.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Cc: Fenghua Yu fenghua.yu@intel.com Cc: stable@vger.kernel.org # v4.14 Cc: Vikas Shivappa vikas.shivappa@linux.intel.com Link: https://lkml.kernel.org/r/20180608160732.9842-1-tony.luck@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/kernel/cpu/intel_rdt.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/arch/x86/kernel/cpu/intel_rdt.c +++ b/arch/x86/kernel/cpu/intel_rdt.c @@ -773,6 +773,8 @@ static __init void rdt_quirks(void) case INTEL_FAM6_SKYLAKE_X: if (boot_cpu_data.x86_stepping <= 4) set_rdt_options("!cmt,!mbmtotal,!mbmlocal,!l3cat"); + else + set_rdt_options("!l3cat"); } }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Luca Coelho luciano.coelho@intel.com
commit 9039d985811d5b109b58b202b7594fd24e433fed upstream.
The page loading code trusts the data provided in the firmware images a bit too much and may cause a buffer overflow or copy unknown data if the block sizes don't match what we expect.
To prevent potential problems, harden the code by checking if the sizes we are copying are what we expect.
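A minimal userspace sketch of the kind of check being added (buffer names and sizes are invented for the example):

#include <stdio.h>
#include <string.h>

#define PAGING_BLOCK_SIZE 8	/* invented size for the example */

static int load_block(char *block, const char *sec, size_t sec_len)
{
	/* refuse to copy more than the destination block can hold */
	if (sec_len > PAGING_BLOCK_SIZE) {
		fprintf(stderr, "section larger than paging block\n");
		return -22;	/* -EINVAL */
	}
	memcpy(block, sec, sec_len);	/* copy only what the image provides */
	return 0;
}

int main(void)
{
	char block[PAGING_BLOCK_SIZE];
	const char good[] = "1234567";		/* 8 bytes with NUL, fits */
	const char bad[]  = "123456789";	/* 10 bytes, would overflow */

	printf("good: %d\n", load_block(block, good, sizeof(good)));
	printf("bad:  %d\n", load_block(block, bad, sizeof(bad)));
	return 0;
}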
Cc: stable@vger.kernel.org Signed-off-by: Luca Coelho luciano.coelho@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/wireless/intel/iwlwifi/fw/paging.c | 49 ++++++++++++++++++++----- 1 file changed, 41 insertions(+), 8 deletions(-)
--- a/drivers/net/wireless/intel/iwlwifi/fw/paging.c +++ b/drivers/net/wireless/intel/iwlwifi/fw/paging.c @@ -8,6 +8,7 @@ * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved. * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH * Copyright(c) 2016 - 2017 Intel Deutschland GmbH + * Copyright(c) 2018 Intel Corporation * * This program is free software; you can redistribute it and/or modify * it under the terms of version 2 of the GNU General Public License as @@ -30,6 +31,7 @@ * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved. * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH * Copyright(c) 2016 - 2017 Intel Deutschland GmbH + * Copyright(c) 2018 Intel Corporation * All rights reserved. * * Redistribution and use in source and binary forms, with or without @@ -174,7 +176,7 @@ static int iwl_alloc_fw_paging_mem(struc static int iwl_fill_paging_mem(struct iwl_fw_runtime *fwrt, const struct fw_img *image) { - int sec_idx, idx; + int sec_idx, idx, ret; u32 offset = 0;
/* @@ -201,17 +203,23 @@ static int iwl_fill_paging_mem(struct iw */ if (sec_idx >= image->num_sec - 1) { IWL_ERR(fwrt, "Paging: Missing CSS and/or paging sections\n"); - iwl_free_fw_paging(fwrt); - return -EINVAL; + ret = -EINVAL; + goto err; }
/* copy the CSS block to the dram */ IWL_DEBUG_FW(fwrt, "Paging: load paging CSS to FW, sec = %d\n", sec_idx);
+ if (image->sec[sec_idx].len > fwrt->fw_paging_db[0].fw_paging_size) { + IWL_ERR(fwrt, "CSS block is larger than paging size\n"); + ret = -EINVAL; + goto err; + } + memcpy(page_address(fwrt->fw_paging_db[0].fw_paging_block), image->sec[sec_idx].data, - fwrt->fw_paging_db[0].fw_paging_size); + image->sec[sec_idx].len); dma_sync_single_for_device(fwrt->trans->dev, fwrt->fw_paging_db[0].fw_paging_phys, fwrt->fw_paging_db[0].fw_paging_size, @@ -232,6 +240,14 @@ static int iwl_fill_paging_mem(struct iw for (idx = 1; idx < fwrt->num_of_paging_blk; idx++) { struct iwl_fw_paging *block = &fwrt->fw_paging_db[idx];
+ if (block->fw_paging_size > image->sec[sec_idx].len - offset) { + IWL_ERR(fwrt, + "Paging: paging size is larger than remaining data in block %d\n", + idx); + ret = -EINVAL; + goto err; + } + memcpy(page_address(block->fw_paging_block), image->sec[sec_idx].data + offset, block->fw_paging_size); @@ -242,19 +258,32 @@ static int iwl_fill_paging_mem(struct iw
IWL_DEBUG_FW(fwrt, "Paging: copied %d paging bytes to block %d\n", - fwrt->fw_paging_db[idx].fw_paging_size, - idx); + block->fw_paging_size, idx); + + offset += block->fw_paging_size;
- offset += fwrt->fw_paging_db[idx].fw_paging_size; + if (offset > image->sec[sec_idx].len) { + IWL_ERR(fwrt, + "Paging: offset goes over section size\n"); + ret = -EINVAL; + goto err; + } }
/* copy the last paging block */ if (fwrt->num_of_pages_in_last_blk > 0) { struct iwl_fw_paging *block = &fwrt->fw_paging_db[idx];
+ if (image->sec[sec_idx].len - offset > block->fw_paging_size) { + IWL_ERR(fwrt, + "Paging: last block is larger than paging size\n"); + ret = -EINVAL; + goto err; + } + memcpy(page_address(block->fw_paging_block), image->sec[sec_idx].data + offset, - FW_PAGING_SIZE * fwrt->num_of_pages_in_last_blk); + image->sec[sec_idx].len - offset); dma_sync_single_for_device(fwrt->trans->dev, block->fw_paging_phys, block->fw_paging_size, @@ -266,6 +295,10 @@ static int iwl_fill_paging_mem(struct iw }
return 0; + +err: + iwl_free_fw_paging(fwrt); + return ret; }
static int iwl_save_fw_paging(struct iwl_fw_runtime *fwrt,
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Martin Brandenburg martin@omnibond.com
commit f6a4b4c9d07dda90c7c29dae96d6119ac6425dca upstream.
As long as a symlink inode remains in-core, the destination (and therefore size) will not be re-fetched from the server, as it cannot change. The original implementation of the attribute cache assumed that setting the expiry time in the past was sufficient to cause a re-fetch of all attributes on the next getattr. That does not work in this case.
The bug manifested itself as follows. When the command sequence
touch foo; ln -s foo bar; ls -l bar
is run, the output was
lrwxrwxrwx. 1 fedora fedora 4906 Apr 24 19:10 bar -> foo
However, after a re-mount, ls -l bar produces
lrwxrwxrwx. 1 fedora fedora 3 Apr 24 19:10 bar -> foo
After this commit, even before a re-mount, the output is
lrwxrwxrwx. 1 fedora fedora 3 Apr 24 19:10 bar -> foo
Reported-by: Becky Ligon ligon@clemson.edu Signed-off-by: Martin Brandenburg martin@omnibond.com Fixes: 71680c18c8f2 ("orangefs: Cache getattr results.") Cc: stable@vger.kernel.org Cc: hubcap@omnibond.com Signed-off-by: Mike Marshall hubcap@omnibond.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/orangefs/namei.c | 7 +++++++ 1 file changed, 7 insertions(+)
--- a/fs/orangefs/namei.c +++ b/fs/orangefs/namei.c @@ -314,6 +314,13 @@ static int orangefs_symlink(struct inode ret = PTR_ERR(inode); goto out; } + /* + * This is necessary because orangefs_inode_getattr will not + * re-read symlink size as it is impossible for it to change. + * Invalidating the cache does not help. orangefs_new_inode + * does not set the correct size (it does not know symname). + */ + inode->i_size = strlen(symname);
gossip_debug(GOSSIP_NAME_DEBUG, "Assigned symlink inode new number of %pU\n",
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Martin Brandenburg martin@omnibond.com
commit 7f54910fa8dfe504f2e1563f4f6ddc3294dfbf3a upstream.
OrangeFS formerly failed to set attributes_mask with the result that software could not see immutable and append flags present in the filesystem.
Reported-by: Becky Ligon ligon@clemson.edu Signed-off-by: Martin Brandenburg martin@omnibond.com Fixes: 68a24a6cc4a6 ("orangefs: implement statx") Cc: stable@vger.kernel.org Cc: hubcap@omnibond.com Signed-off-by: Mike Marshall hubcap@omnibond.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/orangefs/inode.c | 7 +++++++ 1 file changed, 7 insertions(+)
--- a/fs/orangefs/inode.c +++ b/fs/orangefs/inode.c @@ -269,6 +269,13 @@ int orangefs_getattr(const struct path * else stat->result_mask = STATX_BASIC_STATS & ~STATX_SIZE; + + stat->attributes_mask = STATX_ATTR_IMMUTABLE | + STATX_ATTR_APPEND; + if (inode->i_flags & S_IMMUTABLE) + stat->attributes |= STATX_ATTR_IMMUTABLE; + if (inode->i_flags & S_APPEND) + stat->attributes |= STATX_ATTR_APPEND; } return ret; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Even Xu even.xu@intel.com
commit ebeaa367548e9e92dd9374b9464ff6e7d157117b upstream.
The current ISH driver only registers suspend/resume PM callbacks, which don't support hibernation (suspend to disk). Basically, after hibernation the ISH can't resume properly and the user may not see sensor events (for example, screen rotation may not work).
The user will not see a crash or panic, only the following message in the log:
hid-sensor-hub 001F:8086:22D8.0001: timeout waiting for response from ISHTP device
So this patch adds support for S4/hibernation to ISH by using the SIMPLE_DEV_PM_OPS() macro instead of filling struct dev_pm_ops directly. The suspend and resume functions will now be used for both suspend to RAM and hibernation.
If power management is disabled, SIMPLE_DEV_PM_OPS will do nothing and the suspend/resume related functions won't be used, so mark them as __maybe_unused to clarify that this is the intended behavior, and remove the #ifdefs for power management.
Cc: stable@vger.kernel.org Signed-off-by: Even Xu even.xu@intel.com Acked-by: Srinivas Pandruvada srinivas.pandruvada@linux.intel.com Signed-off-by: Jiri Kosina jkosina@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/hid/intel-ish-hid/ipc/pci-ish.c | 22 +++++++--------------- 1 file changed, 7 insertions(+), 15 deletions(-)
--- a/drivers/hid/intel-ish-hid/ipc/pci-ish.c +++ b/drivers/hid/intel-ish-hid/ipc/pci-ish.c @@ -204,8 +204,7 @@ static void ish_remove(struct pci_dev *p kfree(ishtp_dev); }
-#ifdef CONFIG_PM -static struct device *ish_resume_device; +static struct device __maybe_unused *ish_resume_device;
/* 50ms to get resume response */ #define WAIT_FOR_RESUME_ACK_MS 50 @@ -219,7 +218,7 @@ static struct device *ish_resume_device; * in that case a simple resume message is enough, others we need * a reset sequence. */ -static void ish_resume_handler(struct work_struct *work) +static void __maybe_unused ish_resume_handler(struct work_struct *work) { struct pci_dev *pdev = to_pci_dev(ish_resume_device); struct ishtp_device *dev = pci_get_drvdata(pdev); @@ -261,7 +260,7 @@ static void ish_resume_handler(struct wo * * Return: 0 to the pm core */ -static int ish_suspend(struct device *device) +static int __maybe_unused ish_suspend(struct device *device) { struct pci_dev *pdev = to_pci_dev(device); struct ishtp_device *dev = pci_get_drvdata(pdev); @@ -287,7 +286,7 @@ static int ish_suspend(struct device *de return 0; }
-static DECLARE_WORK(resume_work, ish_resume_handler); +static __maybe_unused DECLARE_WORK(resume_work, ish_resume_handler); /** * ish_resume() - ISH resume callback * @device: device pointer @@ -296,7 +295,7 @@ static DECLARE_WORK(resume_work, ish_res * * Return: 0 to the pm core */ -static int ish_resume(struct device *device) +static int __maybe_unused ish_resume(struct device *device) { struct pci_dev *pdev = to_pci_dev(device); struct ishtp_device *dev = pci_get_drvdata(pdev); @@ -310,21 +309,14 @@ static int ish_resume(struct device *dev return 0; }
-static const struct dev_pm_ops ish_pm_ops = { - .suspend = ish_suspend, - .resume = ish_resume, -}; -#define ISHTP_ISH_PM_OPS (&ish_pm_ops) -#else -#define ISHTP_ISH_PM_OPS NULL -#endif /* CONFIG_PM */ +static SIMPLE_DEV_PM_OPS(ish_pm_ops, ish_suspend, ish_resume);
static struct pci_driver ish_driver = { .name = KBUILD_MODNAME, .id_table = ish_pci_tbl, .probe = ish_probe, .remove = ish_remove, - .driver.pm = ISHTP_ISH_PM_OPS, + .driver.pm = &ish_pm_ops, };
module_pci_driver(ish_driver);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jason Gerecke killertofu@gmail.com
commit d471b6b22d37bf9928c6d0202bdaaf76583b8b61 upstream.
The HID descriptor for the 2nd-gen Intuos Pro large (PTH-860) contains a typo which defines an incorrect logical maximum Y value. This causes a small portion of the bottom of the tablet to become unusable (both because the area is below the "bottom" of the tablet and because 'wacom_wac_event' ignores out-of-range values). It also results in a skewed aspect ratio.
To fix this, we add a quirk to 'wacom_usage_mapping' which overwrites the data with the correct value.
Signed-off-by: Jason Gerecke jason.gerecke@wacom.com CC: stable@vger.kernel.org # v4.10+ Signed-off-by: Jiri Kosina jkosina@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/hid/wacom_sys.c | 8 ++++++++ 1 file changed, 8 insertions(+)
--- a/drivers/hid/wacom_sys.c +++ b/drivers/hid/wacom_sys.c @@ -284,6 +284,14 @@ static void wacom_usage_mapping(struct h } }
+ /* 2nd-generation Intuos Pro Large has incorrect Y maximum */ + if (hdev->vendor == USB_VENDOR_ID_WACOM && + hdev->product == 0x0358 && + WACOM_PEN_FIELD(field) && + wacom_equivalent_usage(usage->hid) == HID_GD_Y) { + field->logical_maximum = 43200; + } + switch (usage->hid) { case HID_GD_X: features->x_max = field->logical_maximum;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michael S. Tsirkin mst@redhat.com
commit 670ae9caaca467ea1bfd325cb2a5c98ba87f94ad upstream.
struct vhost_msg within struct vhost_msg_node is copied to userspace. Unfortunately it turns out on 64 bit systems vhost_msg has padding after type which gcc doesn't initialize, leaking 4 uninitialized bytes to userspace.
This padding also unfortunately means 32 bit users of this interface are broken on a 64 bit kernel which will need to be fixed separately.
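To see where the hole comes from, here is a minimal userspace sketch; example_msg is a hypothetical stand-in for the real structure, not the vhost UAPI definition. The union that follows the 32-bit type member contains a 64-bit field, so on a 64-bit build it is 8-byte aligned and the compiler inserts 4 padding bytes that kmalloc'ed memory leaves uninitialized unless they are cleared explicitly:

/* Illustration only: simplified stand-in, not the real vhost structs. */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct example_msg {
	int type;		/* 4 bytes */
	union {			/* 8-byte aligned because of the u64 member */
		uint64_t iova;
		uint8_t pad[64];
	} u;
};

int main(void)
{
	size_t hole = offsetof(struct example_msg, u) - sizeof(int);

	/* On common 64-bit ABIs this reports a 4-byte hole after 'type'. */
	printf("padding after type: %zu bytes\n", hole);
	return 0;
}

A memset() of the whole message before filling in the fields, as the patch below does, is what guarantees those padding bytes never carry stale kernel data.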
Fixes: CVE-2018-1118
Cc: stable@vger.kernel.org
Reported-by: Kevin Easton kevin@guarana.org
Signed-off-by: Michael S. Tsirkin mst@redhat.com
Reported-by: syzbot+87cfa083e727a224754b@syzkaller.appspotmail.com
Signed-off-by: Michael S. Tsirkin mst@redhat.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/vhost/vhost.c | 3 +++
 1 file changed, 3 insertions(+)
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -2382,6 +2382,9 @@ struct vhost_msg_node *vhost_new_msg(str
 	struct vhost_msg_node *node = kmalloc(sizeof *node, GFP_KERNEL);
 	if (!node)
 		return NULL;
+
+	/* Make sure all padding within the structure is initialized. */
+	memset(&node->msg, 0, sizeof node->msg);
 	node->vq = vq;
 	node->msg.type = type;
 	return node;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thadeu Lima de Souza Cascardo cascardo@canonical.com
commit 5cc41e099504b77014358b58567c5ea6293dd220 upstream.
When registering a new binfmt_misc handler, it is possible to overflow the offset to get a negative value, which might crash the system, or possibly leak kernel data.
Here is a crash log when 2500000000 was used as an offset:
BUG: unable to handle kernel paging request at ffff989cfd6edca0
IP: load_misc_binary+0x22b/0x470 [binfmt_misc]
PGD 1ef3e067 P4D 1ef3e067 PUD 0
Oops: 0000 [#1] SMP NOPTI
Modules linked in: binfmt_misc kvm_intel ppdev kvm irqbypass joydev input_leds serio_raw mac_hid parport_pc qemu_fw_cfg parpy
CPU: 0 PID: 2499 Comm: bash Not tainted 4.15.0-22-generic #24-Ubuntu
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.1-1 04/01/2014
RIP: 0010:load_misc_binary+0x22b/0x470 [binfmt_misc]
Call Trace:
 search_binary_handler+0x97/0x1d0
 do_execveat_common.isra.34+0x667/0x810
 SyS_execve+0x31/0x40
 do_syscall_64+0x73/0x130
 entry_SYSCALL_64_after_hwframe+0x3d/0xa2
Use kstrtoint instead of simple_strtoul. It works because the code already sets the delimiter byte to '\0', and the offset is only parsed when the field is not empty.
Tested with offsets -1, 2500000000, UINT_MAX and INT_MAX. Also tested with examples documented at Documentation/admin-guide/binfmt-misc.rst and other registrations from packages on Ubuntu.
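For illustration, a plain userspace sketch of the failure mode (BINPRM_BUF_SIZE and the 2500000000 offset come from the report above; the variable names and the use of strtoul/strtol as stand-ins for simple_strtoul/kstrtoint are assumptions for the demo):

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <limits.h>

#define BINPRM_BUF_SIZE 128

int main(void)
{
	const char *field = "2500000000";
	int size = 4;

	/* Old behaviour: strtoul succeeds, and on typical systems the
	 * value wraps to a negative int when stored in 'offset', so the
	 * size check below no longer trips. */
	int offset = (int)strtoul(field, NULL, 10);
	if (!(size + offset > BINPRM_BUF_SIZE))
		printf("old check accepts offset=%d\n", offset);

	/* New behaviour: kstrtoint-style parsing rejects anything that
	 * does not fit in a non-negative int before any arithmetic. */
	errno = 0;
	long v = strtol(field, NULL, 10);
	if (errno == ERANGE || v < 0 || v > INT_MAX)
		printf("value \"%s\" rejected up front\n", field);
	return 0;
}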
Link: http://lkml.kernel.org/r/20180529135648.14254-1-cascardo@canonical.com
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Thadeu Lima de Souza Cascardo cascardo@canonical.com
Reviewed-by: Andrew Morton akpm@linux-foundation.org
Cc: Alexander Viro viro@zeniv.linux.org.uk
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton akpm@linux-foundation.org
Signed-off-by: Linus Torvalds torvalds@linux-foundation.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 fs/binfmt_misc.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)
--- a/fs/binfmt_misc.c
+++ b/fs/binfmt_misc.c
@@ -387,8 +387,13 @@ static Node *create_entry(const char __u
 	s = strchr(p, del);
 	if (!s)
 		goto einval;
-	*s++ = '\0';
-	e->offset = simple_strtoul(p, &p, 10);
+	*s = '\0';
+	if (p != s) {
+		int r = kstrtoint(p, 10, &e->offset);
+		if (r != 0 || e->offset < 0)
+			goto einval;
+	}
+	p = s;
 	if (*p++)
 		goto einval;
 	pr_debug("register: offset: %#x\n", e->offset);
@@ -428,7 +433,8 @@ static Node *create_entry(const char __u
 	if (e->mask &&
 	    string_unescape_inplace(e->mask, UNESCAPE_HEX) != e->size)
 		goto einval;
-	if (e->size + e->offset > BINPRM_BUF_SIZE)
+	if (e->size > BINPRM_BUF_SIZE ||
+	    BINPRM_BUF_SIZE - e->size < e->offset)
 		goto einval;
 	pr_debug("register: magic/mask length: %i\n", e->size);
 	if (USE_DEBUG) {
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vlastimil Babka vbabka@suse.cz
commit 7810e6781e0fcbca78b91cf65053f895bf59e85f upstream.
In __alloc_pages_slowpath() we reset zonelist and preferred_zoneref for allocations that can ignore memory policies. The zonelist is obtained from current CPU's node. This is a problem for __GFP_THISNODE allocations that want to allocate on a different node, e.g. because the allocating thread has been migrated to a different CPU.
This has been observed to break SLAB in our 4.4-based kernel, because there it relies on __GFP_THISNODE working as intended. If a slab page is put on wrong node's list, then further list manipulations may corrupt the list because page_to_nid() is used to determine which node's list_lock should be locked and thus we may take a wrong lock and race.
Current SLAB implementation seems to be immune by luck thanks to commit 511e3a058812 ("mm/slab: make cache_grow() handle the page allocated on arbitrary node") but there may be others assuming that __GFP_THISNODE works as promised.
We can fix it by simply removing the zonelist reset completely. There is actually no reason to reset it, because memory policies and cpusets don't affect the zonelist choice in the first place. This was different when commit 183f6371aac2 ("mm: ignore mempolicies when using ALLOC_NO_WATERMARK") introduced the code, as mempolicies provided their own restricted zonelists.
We might consider this for 4.17 although I don't know if there's anything currently broken.
SLAB is currently not affected, but in kernels older than 4.7 that don't yet have 511e3a058812 ("mm/slab: make cache_grow() handle the page allocated on arbitrary node") it is. That's at least 4.4 LTS. Older ones I'll have to check.
So stable backports should be more important, but will have to be reviewed carefully, as the code went through many changes. BTW I think that also the ac->preferred_zoneref reset is currently useless if we don't also reset ac->nodemask from a mempolicy to NULL first (which we probably should for the OOM victims etc?), but I would leave that for a separate patch.
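To make the contract concrete, here is a minimal sketch (not code from this patch; the helper name is hypothetical) of the invariant that __GFP_THISNODE callers such as SLAB rely on, and which the stray zonelist reset could silently violate:

/*
 * Sketch only: with __GFP_THISNODE the returned page must come from
 * 'nid', otherwise node-indexed structures (e.g. SLAB's per-node
 * list_lock chosen via page_to_nid()) end up using the wrong node.
 */
#include <linux/gfp.h>
#include <linux/mm.h>

static struct page *alloc_on_node_or_fail(int nid)
{
	struct page *page = alloc_pages_node(nid, GFP_KERNEL | __GFP_THISNODE, 0);

	if (page)
		WARN_ON(page_to_nid(page) != nid);
	return page;
}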
Link: http://lkml.kernel.org/r/20180525130853.13915-1-vbabka@suse.cz
Signed-off-by: Vlastimil Babka vbabka@suse.cz
Fixes: 183f6371aac2 ("mm: ignore mempolicies when using ALLOC_NO_WATERMARK")
Acked-by: Mel Gorman mgorman@techsingularity.net
Cc: Michal Hocko mhocko@kernel.org
Cc: David Rientjes rientjes@google.com
Cc: Joonsoo Kim iamjoonsoo.kim@lge.com
Cc: Vlastimil Babka vbabka@suse.cz
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton akpm@linux-foundation.org
Signed-off-by: Linus Torvalds torvalds@linux-foundation.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 mm/page_alloc.c | 1 -
 1 file changed, 1 deletion(-)
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3981,7 +3981,6 @@ retry:
 	 * orientated.
 	 */
 	if (!(alloc_flags & ALLOC_CPUSET) || reserve_flags) {
-		ac->zonelist = node_zonelist(numa_node_id(), gfp_mask);
 		ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
 					ac->high_zoneidx, ac->nodemask);
 	}
On 24 June 2018 at 20:50, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
> This is the start of the stable review cycle for the 4.14.52 release. There are 52 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
> Responses should be made by Tue Jun 26 14:27:24 UTC 2018. Anything received after that time might be too late.
> The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.52-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y and the diffstat can be found below.
> thanks,
> greg k-h
Results from Linaro’s test farm. No regressions on arm64 and arm.
The kselftest test case mov_ss_trap_64 causes a kernel panic on qemu-system-x86_64 but passes on real x86_64 hardware.
Reported upstream, https://lkml.org/lkml/2018/6/25/19
Summary
------------------------------------------------------------------------
kernel: 4.14.52-rc1
git repo: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
git branch: linux-4.14.y
git commit: 197647acb3aff4711772a6cda73187b3031c57f3
git describe: v4.14.51-53-g197647acb3af
Test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-4.14-oe/build/v4.14.51-53-g197647acb3af
No regressions (compared to build v4.14.51-12-g6c12052ba18a)
Ran 11468 total tests in the following environments and test suites.
Environments
--------------
- dragonboard-410c - arm64
- hi6220-hikey - arm64
- juno-r2 - arm64
- qemu_arm
- qemu_arm64
- qemu_x86_64
- x15 - arm
- x86_64
Test Suites
-----------
* boot
* kselftest
* libhugetlbfs
* ltp-cap_bounds-tests
* ltp-containers-tests
* ltp-cve-tests
* ltp-fcntl-locktests-tests
* ltp-filecaps-tests
* ltp-fs-tests
* ltp-fs_bind-tests
* ltp-fs_perms_simple-tests
* ltp-fsx-tests
* ltp-hugetlb-tests
* ltp-io-tests
* ltp-ipc-tests
* ltp-math-tests
* ltp-nptl-tests
* ltp-pty-tests
* ltp-sched-tests
* ltp-securebits-tests
* ltp-syscalls-tests
* ltp-timers-tests
* kselftest-vsyscall-mode-native
* kselftest-vsyscall-mode-none
On Sun, Jun 24, 2018 at 11:20:53PM +0800, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.14.52 release. There are 52 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
> Responses should be made by Tue Jun 26 14:27:24 UTC 2018. Anything received after that time might be too late.
Build results:
	total: 148 pass: 148 fail: 0
Qemu test results:
	total: 156 pass: 156 fail: 0
Details are available at http://kerneltests.org/builders.
Guenter