This is a note to let you know that I've just added the patch titled
selinux: ensure the context is NUL terminated in security_context_to_sid_core()
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
selinux-ensure-the-context-is-nul-terminated-in-security_context_to_sid_core.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From ef28df55ac27e1e5cd122e19fa311d886d47a756 Mon Sep 17 00:00:00 2001
From: Paul Moore <paul@paul-moore.com>
Date: Tue, 28 Nov 2017 18:51:12 -0500
Subject: selinux: ensure the context is NUL terminated in security_context_to_sid_core()
From: Paul Moore <paul@paul-moore.com>
commit ef28df55ac27e1e5cd122e19fa311d886d47a756 upstream.
The syzbot/syzkaller automated tests found a problem in
security_context_to_sid_core() during early boot (before we load the
SELinux policy) where we could potentially feed context strings without
NUL terminators into the strcmp() function.
We already guard against this during normal operation (after the SELinux
policy has been loaded) by making a copy of the context strings and
explicitly adding a NUL terminator to the end. The patch extends this
protection to the early boot case (no loaded policy) by moving the context
copy earlier in security_context_to_sid_core().
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: Paul Moore <paul@paul-moore.com>
Reviewed-By: William Roberts <william.c.roberts@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
security/selinux/ss/services.c | 18 ++++++++----------
1 file changed, 8 insertions(+), 10 deletions(-)
--- a/security/selinux/ss/services.c
+++ b/security/selinux/ss/services.c
@@ -1413,27 +1413,25 @@ static int security_context_to_sid_core(
if (!scontext_len)
return -EINVAL;
+ /* Copy the string to allow changes and ensure a NUL terminator */
+ scontext2 = kmemdup_nul(scontext, scontext_len, gfp_flags);
+ if (!scontext2)
+ return -ENOMEM;
+
if (!ss_initialized) {
int i;
for (i = 1; i < SECINITSID_NUM; i++) {
- if (!strcmp(initial_sid_to_string[i], scontext)) {
+ if (!strcmp(initial_sid_to_string[i], scontext2)) {
*sid = i;
- return 0;
+ goto out;
}
}
*sid = SECINITSID_KERNEL;
- return 0;
+ goto out;
}
*sid = SECSID_NULL;
- /* Copy the string so that we can modify the copy as we parse it. */
- scontext2 = kmalloc(scontext_len + 1, gfp_flags);
- if (!scontext2)
- return -ENOMEM;
- memcpy(scontext2, scontext, scontext_len);
- scontext2[scontext_len] = 0;
-
if (force) {
/* Save another copy for storing in uninterpreted form */
rc = -ENOMEM;
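For reference, a minimal user-space sketch of the kmemdup_nul() behavior
this fix relies on (the real helper also takes gfp flags; this sketch is
illustrative only): copy exactly scontext_len bytes and always append a
NUL, so the result is safe to hand to strcmp() even when the input had
no terminator.

#include <stdlib.h>
#include <string.h>

/* User-space analogue of kmemdup_nul(): duplicate len bytes and
 * guarantee a trailing NUL, regardless of the input's contents. */
static char *memdup_nul(const char *s, size_t len)
{
        char *buf = malloc(len + 1);

        if (!buf)
                return NULL;
        memcpy(buf, s, len);
        buf[len] = '\0';        /* the terminator strcmp() depends on */
        return buf;
}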
Patches currently in stable-queue which might be from paul@paul-moore.com are
queue-4.14/selinux-skip-bounded-transition-processing-if-the-policy-isn-t-loaded.patch
queue-4.14/selinux-ensure-the-context-is-nul-terminated-in-security_context_to_sid_core.patch
This is a note to let you know that I've just added the patch titled
rds: tcp: correctly sequence cleanup on netns deletion.
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
rds-tcp-correctly-sequence-cleanup-on-netns-deletion.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From 681648e67d43cf269c5590ecf021ed481f4551fc Mon Sep 17 00:00:00 2001
From: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Date: Thu, 30 Nov 2017 11:11:28 -0800
Subject: rds: tcp: correctly sequence cleanup on netns deletion.
From: Sowmini Varadhan <sowmini.varadhan@oracle.com>
commit 681648e67d43cf269c5590ecf021ed481f4551fc upstream.
Commit 8edc3affc077 ("rds: tcp: Take explicit refcounts on struct net")
introduces a regression in rds-tcp netns cleanup. cleanup_net()
(and thus the rds_tcp_dev_event notification) is only called from put_net()
when all netns refcounts go to 0, but this can never happen if the
rds_connection itself is holding a c_net ref that it expects to
release in rds_tcp_kill_sock.
Instead, the rds_tcp_kill_sock callback should make sure to
tear down state carefully, ensuring that the socket teardown
is only done after all data-structures and workqs that depend
on it are quiesced.
The original motivation for commit 8edc3affc077 ("rds: tcp: Take explicit
refcounts on struct net") was to resolve a race condition reported by
syzkaller where workqs for tx/rx/connect were triggered after the
namespace was deleted. Those worker threads should have been
cancelled/flushed before socket tear-down and indeed,
rds_conn_path_destroy() does try to sequence this by doing
/* cancel cp_send_w */
/* cancel cp_recv_w */
/* flush cp_down_w */
/* free data structures */
Here the "flush cp_down_w" will trigger rds_conn_shutdown and thus
invoke rds_tcp_conn_path_shutdown() to close the tcp socket, so that
we ought to have satisfied the requirement that "socket-close is
done after all other dependent state is quiesced". However,
rds_conn_shutdown has a bug in that it *always* triggers the reconnect
workq (and if the connection is successful, the tx/rx workqs are always
restarted, so with the right timing we risk the race conditions reported
by syzkaller).
Netns deletion is like module teardown: there is no need to restart a
reconnect in this case. We can use the c_destroy_in_prog bit
to avoid restarting the reconnect.
Fixes: 8edc3affc077 ("rds: tcp: Take explicit refcounts on struct net")
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
net/rds/connection.c | 3 ++-
net/rds/rds.h | 6 +++---
net/rds/tcp.c | 4 ++--
3 files changed, 7 insertions(+), 6 deletions(-)
--- a/net/rds/connection.c
+++ b/net/rds/connection.c
@@ -366,6 +366,8 @@ void rds_conn_shutdown(struct rds_conn_p
* to the conn hash, so we never trigger a reconnect on this
* conn - the reconnect is always triggered by the active peer. */
cancel_delayed_work_sync(&cp->cp_conn_w);
+ if (conn->c_destroy_in_prog)
+ return;
rcu_read_lock();
if (!hlist_unhashed(&conn->c_hash_node)) {
rcu_read_unlock();
@@ -445,7 +447,6 @@ void rds_conn_destroy(struct rds_connect
*/
rds_cong_remove_conn(conn);
- put_net(conn->c_net);
kfree(conn->c_path);
kmem_cache_free(rds_conn_slab, conn);
--- a/net/rds/rds.h
+++ b/net/rds/rds.h
@@ -150,7 +150,7 @@ struct rds_connection {
/* Protocol version */
unsigned int c_version;
- struct net *c_net;
+ possible_net_t c_net;
struct list_head c_map_item;
unsigned long c_map_queued;
@@ -165,13 +165,13 @@ struct rds_connection {
static inline
struct net *rds_conn_net(struct rds_connection *conn)
{
- return conn->c_net;
+ return read_pnet(&conn->c_net);
}
static inline
void rds_conn_net_set(struct rds_connection *conn, struct net *net)
{
- conn->c_net = get_net(net);
+ write_pnet(&conn->c_net, net);
}
#define RDS_FLAG_CONG_BITMAP 0x01
--- a/net/rds/tcp.c
+++ b/net/rds/tcp.c
@@ -527,7 +527,7 @@ static void rds_tcp_kill_sock(struct net
rds_tcp_listen_stop(lsock, &rtn->rds_tcp_accept_w);
spin_lock_irq(&rds_tcp_conn_lock);
list_for_each_entry_safe(tc, _tc, &rds_tcp_conn_list, t_tcp_node) {
- struct net *c_net = tc->t_cpath->cp_conn->c_net;
+ struct net *c_net = read_pnet(&tc->t_cpath->cp_conn->c_net);
if (net != c_net || !tc->t_sock)
continue;
@@ -586,7 +586,7 @@ static void rds_tcp_sysctl_reset(struct
spin_lock_irq(&rds_tcp_conn_lock);
list_for_each_entry_safe(tc, _tc, &rds_tcp_conn_list, t_tcp_node) {
- struct net *c_net = tc->t_cpath->cp_conn->c_net;
+ struct net *c_net = read_pnet(&tc->t_cpath->cp_conn->c_net);
if (net != c_net || !tc->t_sock)
continue;
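For illustration, a hedged sketch of the guard the diff above adds
(names simplified from the actual rds code): a teardown-in-progress flag
checked after pending work is cancelled, so the shutdown path stops
short of rearming the reconnect worker during netns deletion.

#include <stdbool.h>

struct conn_path {
        bool destroy_in_prog;   /* set before netns/module teardown */
};

static void conn_shutdown(struct conn_path *cp)
{
        /* ... cancel any pending connect/reconnect work ... */
        if (cp->destroy_in_prog)
                return;         /* teardown: do not rearm the reconnect */
        /* ... otherwise queue the reconnect work as usual ... */
}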
Patches currently in stable-queue which might be from sowmini.varadhan@oracle.com are
queue-4.14/rds-tcp-atomically-purge-entries-from-rds_tcp_conn_list-during-netns-delete.patch
queue-4.14/rds-tcp-correctly-sequence-cleanup-on-netns-deletion.patch
This is a note to let you know that I've just added the patch titled
rds: tcp: atomically purge entries from rds_tcp_conn_list during netns delete
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
rds-tcp-atomically-purge-entries-from-rds_tcp_conn_list-during-netns-delete.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From f10b4cff98c6977668434fbf5dd58695eeca2897 Mon Sep 17 00:00:00 2001
From: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Date: Thu, 30 Nov 2017 11:11:29 -0800
Subject: rds: tcp: atomically purge entries from rds_tcp_conn_list during netns delete
From: Sowmini Varadhan <sowmini.varadhan@oracle.com>
commit f10b4cff98c6977668434fbf5dd58695eeca2897 upstream.
The rds_tcp_kill_sock() function parses the rds_tcp_conn_list
to find the rds_connection entries marked for deletion as part
of the netns deletion under the protection of the rds_tcp_conn_lock.
Since the rds_tcp_conn_list tracks rds_tcp_connections (which
have a 1:1 mapping with rds_conn_path), multiple tc entries in
the rds_tcp_conn_list will map to a single rds_connection, and will
be deleted as part of the rds_conn_destroy() operation that is
done outside the rds_tcp_conn_lock.
The rds_tcp_conn_list traversal done under the protection of
rds_tcp_conn_lock should not leave any doomed tc entries in
the list after the rds_tcp_conn_lock is released, else another
concurrently executing netns delete thread (for a different netns)
may trip over these entries.
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
net/rds/tcp.c | 9 +++++++--
net/rds/tcp.h | 1 +
2 files changed, 8 insertions(+), 2 deletions(-)
--- a/net/rds/tcp.c
+++ b/net/rds/tcp.c
@@ -306,7 +306,8 @@ static void rds_tcp_conn_free(void *arg)
rdsdebug("freeing tc %p\n", tc);
spin_lock_irqsave(&rds_tcp_conn_lock, flags);
- list_del(&tc->t_tcp_node);
+ if (!tc->t_tcp_node_detached)
+ list_del(&tc->t_tcp_node);
spin_unlock_irqrestore(&rds_tcp_conn_lock, flags);
kmem_cache_free(rds_tcp_conn_slab, tc);
@@ -531,8 +532,12 @@ static void rds_tcp_kill_sock(struct net
if (net != c_net || !tc->t_sock)
continue;
- if (!list_has_conn(&tmp_list, tc->t_cpath->cp_conn))
+ if (!list_has_conn(&tmp_list, tc->t_cpath->cp_conn)) {
list_move_tail(&tc->t_tcp_node, &tmp_list);
+ } else {
+ list_del(&tc->t_tcp_node);
+ tc->t_tcp_node_detached = true;
+ }
}
spin_unlock_irq(&rds_tcp_conn_lock);
list_for_each_entry_safe(tc, _tc, &tmp_list, t_tcp_node) {
--- a/net/rds/tcp.h
+++ b/net/rds/tcp.h
@@ -12,6 +12,7 @@ struct rds_tcp_incoming {
struct rds_tcp_connection {
struct list_head t_tcp_node;
+ bool t_tcp_node_detached;
struct rds_conn_path *t_cpath;
/* t_conn_path_lock synchronizes the connection establishment between
* rds_tcp_accept_one and rds_tcp_conn_path_connect
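A minimal user-space sketch of the "detached" flag pattern used here
(hypothetical list type, not the kernel's list/hlist API): entries
unlinked during the purge are flagged, so the later free path does not
unlink them a second time and corrupt the list.

#include <stdbool.h>

struct node {
        struct node *prev, *next;
        bool detached;          /* already taken off the global list */
};

static void node_unlink(struct node *n)
{
        n->prev->next = n->next;
        n->next->prev = n->prev;
        n->detached = true;
}

static void node_free_path(struct node *n)
{
        if (!n->detached)       /* only unlink entries still on the list */
                node_unlink(n);
        /* ... free(n) ... */
}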
Patches currently in stable-queue which might be from sowmini.varadhan@oracle.com are
queue-4.14/rds-tcp-atomically-purge-entries-from-rds_tcp_conn_list-during-netns-delete.patch
queue-4.14/rds-tcp-correctly-sequence-cleanup-on-netns-deletion.patch
This is a note to let you know that I've just added the patch titled
ptr_ring: try vmalloc() when kmalloc() fails
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
ptr_ring-try-vmalloc-when-kmalloc-fails.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From 0bf7800f1799b5b1fd7d4f024e9ece53ac489011 Mon Sep 17 00:00:00 2001
From: Jason Wang <jasowang@redhat.com>
Date: Fri, 9 Feb 2018 17:45:50 +0800
Subject: ptr_ring: try vmalloc() when kmalloc() fails
From: Jason Wang <jasowang@redhat.com>
commit 0bf7800f1799b5b1fd7d4f024e9ece53ac489011 upstream.
This patch switches to kvmalloc_array(), which falls back to vmalloc()
when kmalloc() fails.
Reported-by: syzbot+e4d4f9ddd4295539735d@syzkaller.appspotmail.com
Fixes: 2e0ab8ca83c12 ("ptr_ring: array based FIFO for pointers")
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
include/linux/ptr_ring.h | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
--- a/include/linux/ptr_ring.h
+++ b/include/linux/ptr_ring.h
@@ -445,11 +445,14 @@ static inline int ptr_ring_consume_batch
__PTR_RING_PEEK_CALL_v; \
})
+/* Not all gfp_t flags (besides GFP_KERNEL) are allowed. See
+ * documentation for vmalloc for which of them are legal.
+ */
static inline void **__ptr_ring_init_queue_alloc(unsigned int size, gfp_t gfp)
{
if (size * sizeof(void *) > KMALLOC_MAX_SIZE)
return NULL;
- return kcalloc(size, sizeof(void *), gfp);
+ return kvmalloc_array(size, sizeof(void *), gfp | __GFP_ZERO);
}
static inline void __ptr_ring_set_size(struct ptr_ring *r, int size)
@@ -582,7 +585,7 @@ static inline int ptr_ring_resize(struct
spin_unlock(&(r)->producer_lock);
spin_unlock_irqrestore(&(r)->consumer_lock, flags);
- kfree(old);
+ kvfree(old);
return 0;
}
@@ -622,7 +625,7 @@ static inline int ptr_ring_resize_multip
}
for (i = 0; i < nrings; ++i)
- kfree(queues[i]);
+ kvfree(queues[i]);
kfree(queues);
@@ -630,7 +633,7 @@ static inline int ptr_ring_resize_multip
nomem:
while (--i >= 0)
- kfree(queues[i]);
+ kvfree(queues[i]);
kfree(queues);
@@ -645,7 +648,7 @@ static inline void ptr_ring_cleanup(stru
if (destroy)
while ((ptr = ptr_ring_consume(r)))
destroy(ptr);
- kfree(r->queue);
+ kvfree(r->queue);
}
#endif /* _LINUX_PTR_RING_H */
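As a fragment (kernel context assumed, function names hypothetical),
the kvmalloc_array()/kvfree() pairing rule the patch follows: the
buffer may come from either kmalloc or vmalloc, so it must be released
with kvfree(), which dispatches on the address; a plain kfree() on a
vmalloc'd buffer would be a bug.

static void **ring_queue_alloc(unsigned int size, gfp_t gfp)
{
        /* falls back to vmalloc() when the slab cannot satisfy it */
        return kvmalloc_array(size, sizeof(void *), gfp | __GFP_ZERO);
}

static void ring_queue_free(void **queue)
{
        kvfree(queue);  /* correct for both kmalloc and vmalloc memory */
}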
Patches currently in stable-queue which might be from jasowang@redhat.com are
queue-4.14/ptr_ring-try-vmalloc-when-kmalloc-fails.patch
queue-4.14/vhost-use-mutex_lock_nested-in-vhost_dev_lock_vqs.patch
queue-4.14/ptr_ring-fail-early-if-queue-occupies-more-than-kmalloc_max_size.patch
This is a note to let you know that I've just added the patch titled
netfilter: xt_RATEEST: acquire xt_rateest_mutex for hash insert
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
netfilter-xt_rateest-acquire-xt_rateest_mutex-for-hash-insert.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From 7dc68e98757a8eccf8ca7a53a29b896f1eef1f76 Mon Sep 17 00:00:00 2001
From: Cong Wang <xiyou.wangcong@gmail.com>
Date: Mon, 5 Feb 2018 14:41:45 -0800
Subject: netfilter: xt_RATEEST: acquire xt_rateest_mutex for hash insert
From: Cong Wang <xiyou.wangcong@gmail.com>
commit 7dc68e98757a8eccf8ca7a53a29b896f1eef1f76 upstream.
rateest_hash is supposed to be protected by xt_rateest_mutex,
and, as suggested by Eric, lookup and insert should be atomic,
so we should acquire the xt_rateest_mutex once for both.
Introduce a non-locking helper for internal use and keep the
locking one for external callers.
Reported-by: <syzbot+5cb189720978275e4c75@syzkaller.appspotmail.com>
Fixes: 5859034d7eb8 ("[NETFILTER]: x_tables: add RATEEST target")
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Reviewed-by: Florian Westphal <fw@strlen.de>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
net/netfilter/xt_RATEEST.c | 22 +++++++++++++++++-----
1 file changed, 17 insertions(+), 5 deletions(-)
--- a/net/netfilter/xt_RATEEST.c
+++ b/net/netfilter/xt_RATEEST.c
@@ -39,23 +39,31 @@ static void xt_rateest_hash_insert(struc
hlist_add_head(&est->list, &rateest_hash[h]);
}
-struct xt_rateest *xt_rateest_lookup(const char *name)
+static struct xt_rateest *__xt_rateest_lookup(const char *name)
{
struct xt_rateest *est;
unsigned int h;
h = xt_rateest_hash(name);
- mutex_lock(&xt_rateest_mutex);
hlist_for_each_entry(est, &rateest_hash[h], list) {
if (strcmp(est->name, name) == 0) {
est->refcnt++;
- mutex_unlock(&xt_rateest_mutex);
return est;
}
}
- mutex_unlock(&xt_rateest_mutex);
+
return NULL;
}
+
+struct xt_rateest *xt_rateest_lookup(const char *name)
+{
+ struct xt_rateest *est;
+
+ mutex_lock(&xt_rateest_mutex);
+ est = __xt_rateest_lookup(name);
+ mutex_unlock(&xt_rateest_mutex);
+ return est;
+}
EXPORT_SYMBOL_GPL(xt_rateest_lookup);
void xt_rateest_put(struct xt_rateest *est)
@@ -100,8 +108,10 @@ static int xt_rateest_tg_checkentry(cons
net_get_random_once(&jhash_rnd, sizeof(jhash_rnd));
- est = xt_rateest_lookup(info->name);
+ mutex_lock(&xt_rateest_mutex);
+ est = __xt_rateest_lookup(info->name);
if (est) {
+ mutex_unlock(&xt_rateest_mutex);
/*
* If estimator parameters are specified, they must match the
* existing estimator.
@@ -139,11 +149,13 @@ static int xt_rateest_tg_checkentry(cons
info->est = est;
xt_rateest_hash_insert(est);
+ mutex_unlock(&xt_rateest_mutex);
return 0;
err2:
kfree(est);
err1:
+ mutex_unlock(&xt_rateest_mutex);
return ret;
}
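A user-space sketch of the lock-once lookup-or-insert pattern applied
here (pthread analogue, hypothetical names; __lookup/__insert stand in
for the unlocked helpers): taking the mutex separately for lookup and
insert leaves a window where two threads both miss and both insert, so
both steps must share one critical section.

#include <pthread.h>

struct entry;

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

static struct entry *__lookup(const char *name);  /* caller holds lock */
static void __insert(struct entry *e);            /* caller holds lock */

static struct entry *lookup_or_insert(const char *name, struct entry *fresh)
{
        struct entry *e;

        pthread_mutex_lock(&table_lock);
        e = __lookup(name);
        if (!e) {               /* nobody raced us in: insert atomically */
                __insert(fresh);
                e = fresh;
        }
        pthread_mutex_unlock(&table_lock);
        return e;
}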
Patches currently in stable-queue which might be from xiyou.wangcong@gmail.com are
queue-4.14/net_sched-gen_estimator-fix-lockdep-splat.patch
queue-4.14/netfilter-xt_cgroup-initialize-info-priv-in-cgroup_mt_check_v1.patch
queue-4.14/netfilter-xt_rateest-acquire-xt_rateest_mutex-for-hash-insert.patch
queue-4.14/xfrm-check-id-proto-in-validate_tmpl.patch
This is a note to let you know that I've just added the patch titled
ptr_ring: fail early if queue occupies more than KMALLOC_MAX_SIZE
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
ptr_ring-fail-early-if-queue-occupies-more-than-kmalloc_max_size.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From 6e6e41c3112276288ccaf80c70916779b84bb276 Mon Sep 17 00:00:00 2001
From: Jason Wang <jasowang@redhat.com>
Date: Fri, 9 Feb 2018 17:45:49 +0800
Subject: ptr_ring: fail early if queue occupies more than KMALLOC_MAX_SIZE
From: Jason Wang <jasowang@redhat.com>
commit 6e6e41c3112276288ccaf80c70916779b84bb276 upstream.
To avoid the slab allocator warning about an excessive allocation size,
fail early if the queue would occupy more than KMALLOC_MAX_SIZE.
Reported-by: syzbot+e4d4f9ddd4295539735d@syzkaller.appspotmail.com
Fixes: 2e0ab8ca83c12 ("ptr_ring: array based FIFO for pointers")
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
include/linux/ptr_ring.h | 2 ++
1 file changed, 2 insertions(+)
--- a/include/linux/ptr_ring.h
+++ b/include/linux/ptr_ring.h
@@ -447,6 +447,8 @@ static inline int ptr_ring_consume_batch
static inline void **__ptr_ring_init_queue_alloc(unsigned int size, gfp_t gfp)
{
+ if (size * sizeof(void *) > KMALLOC_MAX_SIZE)
+ return NULL;
return kcalloc(size, sizeof(void *), gfp);
}
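A hedged fragment (kernel context assumed, hypothetical function name)
showing the same early bound written with a division, an equivalent
form in which the comparison itself can never overflow:

static void **ring_alloc_bounded(unsigned int size, gfp_t gfp)
{
        /* refuse anything the slab allocator would only warn about */
        if (size > KMALLOC_MAX_SIZE / sizeof(void *))
                return NULL;
        return kcalloc(size, sizeof(void *), gfp);
}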
Patches currently in stable-queue which might be from jasowang@redhat.com are
queue-4.14/ptr_ring-try-vmalloc-when-kmalloc-fails.patch
queue-4.14/vhost-use-mutex_lock_nested-in-vhost_dev_lock_vqs.patch
queue-4.14/ptr_ring-fail-early-if-queue-occupies-more-than-kmalloc_max_size.patch
This is a note to let you know that I've just added the patch titled
netfilter: x_tables: avoid out-of-bounds reads in xt_request_find_{match|target}
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
netfilter-x_tables-avoid-out-of-bounds-reads-in-xt_request_find_-match-target.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From da17c73b6eb74aad3c3c0654394635675b623b3e Mon Sep 17 00:00:00 2001
From: Eric Dumazet <edumazet@google.com>
Date: Wed, 24 Jan 2018 17:16:09 -0800
Subject: netfilter: x_tables: avoid out-of-bounds reads in xt_request_find_{match|target}
From: Eric Dumazet <edumazet@google.com>
commit da17c73b6eb74aad3c3c0654394635675b623b3e upstream.
It looks like syzbot found its way into netfilter territory.
The issue here is that @name comes from user space and might
not be NUL terminated.
Out-of-bounds reads happen, and KASAN is not happy.
v2 adds a similar fix for xt_request_find_target(),
as Florian advised.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
Acked-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
net/netfilter/x_tables.c | 6 ++++++
1 file changed, 6 insertions(+)
--- a/net/netfilter/x_tables.c
+++ b/net/netfilter/x_tables.c
@@ -209,6 +209,9 @@ xt_request_find_match(uint8_t nfproto, c
{
struct xt_match *match;
+ if (strnlen(name, XT_EXTENSION_MAXNAMELEN) == XT_EXTENSION_MAXNAMELEN)
+ return ERR_PTR(-EINVAL);
+
match = xt_find_match(nfproto, name, revision);
if (IS_ERR(match)) {
request_module("%st_%s", xt_prefix[nfproto], name);
@@ -251,6 +254,9 @@ struct xt_target *xt_request_find_target
{
struct xt_target *target;
+ if (strnlen(name, XT_EXTENSION_MAXNAMELEN) == XT_EXTENSION_MAXNAMELEN)
+ return ERR_PTR(-EINVAL);
+
target = xt_find_target(af, name, revision);
if (IS_ERR(target)) {
request_module("%st_%s", xt_prefix[af], name);
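A user-space sketch of the added check (29 is used below as a stand-in
for XT_EXTENSION_MAXNAMELEN): a fixed-size name copied from user space
is only safe for str*() functions if a NUL occurs inside the buffer,
and strnlen() can probe that without reading past the end.

#include <string.h>

#define NAME_MAXLEN 29  /* stand-in for XT_EXTENSION_MAXNAMELEN */

/* Returns nonzero iff a NUL terminator exists within the buffer,
 * making the name safe to pass to strcmp() and friends. */
static int name_is_terminated(const char name[NAME_MAXLEN])
{
        return strnlen(name, NAME_MAXLEN) < NAME_MAXLEN;
}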
Patches currently in stable-queue which might be from edumazet@google.com are
queue-4.14/kcm-check-if-sk_user_data-already-set-in-kcm_attach.patch
queue-4.14/net_sched-gen_estimator-fix-lockdep-splat.patch
queue-4.14/netfilter-x_tables-avoid-out-of-bounds-reads-in-xt_request_find_-match-target.patch
queue-4.14/kcm-only-allow-tcp-sockets-to-be-attached-to-a-kcm-mux.patch
queue-4.14/netfilter-xt_rateest-acquire-xt_rateest_mutex-for-hash-insert.patch
This is a note to let you know that I've just added the patch titled
netfilter: xt_cgroup: initialize info->priv in cgroup_mt_check_v1()
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
netfilter-xt_cgroup-initialize-info-priv-in-cgroup_mt_check_v1.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From ba7cd5d95f25cc6005f687dabdb4e7a6063adda9 Mon Sep 17 00:00:00 2001
From: Cong Wang <xiyou.wangcong@gmail.com>
Date: Wed, 31 Jan 2018 15:02:47 -0800
Subject: netfilter: xt_cgroup: initialize info->priv in cgroup_mt_check_v1()
From: Cong Wang <xiyou.wangcong@gmail.com>
commit ba7cd5d95f25cc6005f687dabdb4e7a6063adda9 upstream.
xt_cgroup_info_v1->priv is an internal pointer used only by the kernel;
we should not trust what user space provides.
Reported-by: <syzbot+4fbcfcc0d2e6592bd641@syzkaller.appspotmail.com>
Fixes: c38c4597e4bf ("netfilter: implement xt_cgroup cgroup2 path match")
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
net/netfilter/xt_cgroup.c | 1 +
1 file changed, 1 insertion(+)
--- a/net/netfilter/xt_cgroup.c
+++ b/net/netfilter/xt_cgroup.c
@@ -52,6 +52,7 @@ static int cgroup_mt_check_v1(const stru
return -EINVAL;
}
+ info->priv = NULL;
if (info->has_path) {
cgrp = cgroup_get_from_path(info->path);
if (IS_ERR(cgrp)) {
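A sketch of the rule being enforced (hypothetical struct, simplified
from xt_cgroup_info_v1): any field of a user-supplied configuration
blob that is meaningful only inside the kernel must be reinitialized on
entry, because a crafted value would otherwise be dereferenced as a
kernel pointer.

struct match_info {
        char path[64];          /* user-controlled input */
        void *priv;             /* kernel-internal, opaque to user space */
};

static int check_info(struct match_info *info)
{
        info->priv = NULL;      /* discard whatever user space put here */
        /* ... validate info->path, then set priv from trusted data ... */
        return 0;
}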
Patches currently in stable-queue which might be from xiyou.wangcong@gmail.com are
queue-4.14/net_sched-gen_estimator-fix-lockdep-splat.patch
queue-4.14/netfilter-xt_cgroup-initialize-info-priv-in-cgroup_mt_check_v1.patch
queue-4.14/netfilter-xt_rateest-acquire-xt_rateest_mutex-for-hash-insert.patch
queue-4.14/xfrm-check-id-proto-in-validate_tmpl.patch
This is a note to let you know that I've just added the patch titled
netfilter: x_tables: fix int overflow in xt_alloc_table_info()
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
netfilter-x_tables-fix-int-overflow-in-xt_alloc_table_info.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From 889c604fd0b5f6d3b8694ade229ee44124de1127 Mon Sep 17 00:00:00 2001
From: Dmitry Vyukov <dvyukov@google.com>
Date: Thu, 28 Dec 2017 09:48:54 +0100
Subject: netfilter: x_tables: fix int overflow in xt_alloc_table_info()
From: Dmitry Vyukov <dvyukov@google.com>
commit 889c604fd0b5f6d3b8694ade229ee44124de1127 upstream.
syzkaller triggered OOM kills by passing ipt_replace.size = -1
to IPT_SO_SET_REPLACE. The root cause is that SMP_ALIGN() in
xt_alloc_table_info() causes an int overflow, so the size check passes
when it should not. SMP_ALIGN() is a leftover that is no longer needed;
remove the SMP_ALIGN() call in xt_alloc_table_info().
Reported-by: syzbot+4396883fa8c4f64e0175@syzkaller.appspotmail.com
Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
net/netfilter/x_tables.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
--- a/net/netfilter/x_tables.c
+++ b/net/netfilter/x_tables.c
@@ -39,7 +39,6 @@ MODULE_LICENSE("GPL");
MODULE_AUTHOR("Harald Welte <laforge(a)netfilter.org>");
MODULE_DESCRIPTION("{ip,ip6,arp,eb}_tables backend module");
-#define SMP_ALIGN(x) (((x) + SMP_CACHE_BYTES-1) & ~(SMP_CACHE_BYTES-1))
#define XT_PCPU_BLOCK_SIZE 4096
struct compat_delta {
@@ -1000,7 +999,7 @@ struct xt_table_info *xt_alloc_table_inf
return NULL;
/* Pedantry: prevent them from hitting BUG() in vmalloc.c --RR */
- if ((SMP_ALIGN(size) >> PAGE_SHIFT) + 2 > totalram_pages)
+ if ((size >> PAGE_SHIFT) + 2 > totalram_pages)
return NULL;
info = kvmalloc(sz, GFP_KERNEL);
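A small user-space demo of the overflow (the SMP_CACHE_BYTES value
below is a stand-in; it is arch-dependent in the kernel): with
size = -1, SMP_ALIGN(size) wraps around to 0, so the subsequent
"(... >> PAGE_SHIFT) + 2" looks tiny and the totalram_pages check
passes when it should not.

#include <stdio.h>

#define SMP_CACHE_BYTES 64      /* stand-in; arch-dependent in the kernel */
#define SMP_ALIGN(x) (((x) + SMP_CACHE_BYTES-1) & ~(SMP_CACHE_BYTES-1))

int main(void)
{
        unsigned int size = (unsigned int)-1;   /* ipt_replace.size = -1 */

        /* wraps: (0xffffffff + 63) truncates, then the mask yields 0 */
        printf("SMP_ALIGN(%u) = %u\n", size, SMP_ALIGN(size));
        return 0;
}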
Patches currently in stable-queue which might be from dvyukov@google.com are
queue-4.14/kvm-x86-check-input-paging-mode-when-cs.l-is-set.patch
queue-4.14/kvm-x86-fix-escape-of-guest-dr6-to-the-host.patch
queue-4.14/blktrace-fix-unlocked-registration-of-tracepoints.patch
queue-4.14/netfilter-x_tables-fix-int-overflow-in-xt_alloc_table_info.patch
queue-4.14/netfilter-ipt_clusterip-fix-out-of-bounds-accesses-in-clusterip_tg_check.patch
queue-4.14/kcov-detect-double-association-with-a-single-task.patch