Hello,
after upgrading our NFS server from 4.9.75 to 4.9.78, group permissions stopped
working (for clients). If group permissions are needed to access a file or
directory, access is sometimes granted but rather often denied. Access to the
same object is often denied within seconds of having been granted by an
earlier access. User permissions work fine.
Downgrading to 4.9.75 fixes the issue.
We use Kerberos.
Regards,
--
Wolfgang Walter
Studentenwerk München
Anstalt des öffentlichen Rechts
This is a note to let you know that I've just added the patch titled
nfsd: auth: Fix gid sorting when rootsquash enabled
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
nfsd-auth-fix-gid-sorting-when-rootsquash-enabled.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 1995266727fa8143897e89b55f5d3c79aa828420 Mon Sep 17 00:00:00 2001
From: Ben Hutchings <ben.hutchings(a)codethink.co.uk>
Date: Mon, 22 Jan 2018 20:11:06 +0000
Subject: nfsd: auth: Fix gid sorting when rootsquash enabled
From: Ben Hutchings <ben.hutchings(a)codethink.co.uk>
commit 1995266727fa8143897e89b55f5d3c79aa828420 upstream.
Commit bdcf0a423ea1 ("kernel: make groups_sort calling a responsibility
group_info allocators") appears to break nfsd rootsquash in a pretty
major way.
It adds a call to groups_sort() inside the loop that copies/squashes
gids, which means the valid gids are sorted along with the following
garbage. The net result is that the highest numbered valid gids are
replaced with any lower-valued garbage gids, possibly including 0.
We should sort only once, after filling in all the gids.
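To see the failure mode concretely, here is a minimal userspace sketch
(illustrative only: the gid values are made up and qsort() stands in for
groups_sort(); this is not the kernel code):

/* Only the first i+1 slots are valid inside the loop, so sorting there
 * pulls the uninitialized tail in front of real gids. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_gid(const void *a, const void *b)
{
        unsigned int x = *(const unsigned int *)a;
        unsigned int y = *(const unsigned int *)b;

        return (x > y) - (x < y);
}

int main(void)
{
        unsigned int gid[4] = { 0, 0, 0, 0 };   /* stands in for garbage */
        const unsigned int rqgid[4] = { 1000, 2000, 3000, 4000 };
        int i;

        for (i = 0; i < 4; i++) {
                gid[i] = rqgid[i];
                /* BUG: sorting inside the loop. After the last pass the
                 * array is { 0, 0, 2000, 4000 }: valid gids 1000 and
                 * 3000 were displaced by garbage zeros. */
                qsort(gid, 4, sizeof(gid[0]), cmp_gid);
        }
        /* FIX: call qsort() here instead, once, after all slots are set. */

        for (i = 0; i < 4; i++)
                printf("%u ", gid[i]);
        printf("\n");
        return 0;
}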
Fixes: bdcf0a423ea1 ("kernel: make groups_sort calling a responsibility ...")
Signed-off-by: Ben Hutchings <ben.hutchings(a)codethink.co.uk>
Acked-by: J. Bruce Fields <bfields(a)redhat.com>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Wolfgang Walter <linux(a)stwm.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
fs/nfsd/auth.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
--- a/fs/nfsd/auth.c
+++ b/fs/nfsd/auth.c
@@ -60,9 +60,10 @@ int nfsd_setuser(struct svc_rqst *rqstp,
else
GROUP_AT(gi, i) = GROUP_AT(rqgi, i);
- /* Each thread allocates its own gi, no race */
- groups_sort(gi);
}
+
+ /* Each thread allocates its own gi, no race */
+ groups_sort(gi);
} else {
gi = get_group_info(rqgi);
}
Patches currently in stable-queue which might be from ben.hutchings(a)codethink.co.uk are
queue-4.4/vsyscall-fix-permissions-for-emulate-mode-with-kaiser-pti.patch
queue-4.4/ipv6-fix-getsockopt-for-sockets-with-default-ipv6_autoflowlabel.patch
queue-4.4/x86-microcode-intel-fix-bdw-late-loading-revision-check.patch
queue-4.4/nfsd-auth-fix-gid-sorting-when-rootsquash-enabled.patch
This is a note to let you know that I've just added the patch titled
nfsd: auth: Fix gid sorting when rootsquash enabled
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
nfsd-auth-fix-gid-sorting-when-rootsquash-enabled.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 1995266727fa8143897e89b55f5d3c79aa828420 Mon Sep 17 00:00:00 2001
From: Ben Hutchings <ben.hutchings(a)codethink.co.uk>
Date: Mon, 22 Jan 2018 20:11:06 +0000
Subject: nfsd: auth: Fix gid sorting when rootsquash enabled
From: Ben Hutchings <ben.hutchings(a)codethink.co.uk>
commit 1995266727fa8143897e89b55f5d3c79aa828420 upstream.
Commit bdcf0a423ea1 ("kernel: make groups_sort calling a responsibility
group_info allocators") appears to break nfsd rootsquash in a pretty
major way.
It adds a call to groups_sort() inside the loop that copies/squashes
gids, which means the valid gids are sorted along with the following
garbage. The net result is that the highest numbered valid gids are
replaced with any lower-valued garbage gids, possibly including 0.
We should sort only once, after filling in all the gids.
Fixes: bdcf0a423ea1 ("kernel: make groups_sort calling a responsibility ...")
Signed-off-by: Ben Hutchings <ben.hutchings(a)codethink.co.uk>
Acked-by: J. Bruce Fields <bfields(a)redhat.com>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Wolfgang Walter <linux(a)stwm.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
fs/nfsd/auth.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
--- a/fs/nfsd/auth.c
+++ b/fs/nfsd/auth.c
@@ -60,10 +60,10 @@ int nfsd_setuser(struct svc_rqst *rqstp,
gi->gid[i] = exp->ex_anon_gid;
else
gi->gid[i] = rqgi->gid[i];
-
- /* Each thread allocates its own gi, no race */
- groups_sort(gi);
}
+
+ /* Each thread allocates its own gi, no race */
+ groups_sort(gi);
} else {
gi = get_group_info(rqgi);
}
Patches currently in stable-queue which might be from ben.hutchings(a)codethink.co.uk are
queue-4.14/ipv6-fix-getsockopt-for-sockets-with-default-ipv6_autoflowlabel.patch
queue-4.14/nfsd-auth-fix-gid-sorting-when-rootsquash-enabled.patch
This is a note to let you know that I've just added the patch titled
cpufreq: governor: Ensure sufficiently large sampling intervals
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
cpufreq-governor-ensure-sufficiently-large-sampling-intervals.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 56026645e2b6f11ede34a5e6ab69d3eb56f9c8fc Mon Sep 17 00:00:00 2001
From: "Rafael J. Wysocki" <rafael.j.wysocki(a)intel.com>
Date: Mon, 18 Dec 2017 02:15:32 +0100
Subject: cpufreq: governor: Ensure sufficiently large sampling intervals
From: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
commit 56026645e2b6f11ede34a5e6ab69d3eb56f9c8fc upstream.
After commit aa7519af450d (cpufreq: Use transition_delay_us for legacy
governors as well), the sampling_rate field of struct dbs_data may be
less than the tick period, which causes dbs_update() to produce
incorrect results. Make the code ensure that the value of that field
is always sufficiently large.
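For a sense of scale, here is an illustrative userspace computation of the
new lower bound (HZ=250 is an assumption, and TICK_NSEC is simplified to
NSEC_PER_SEC / HZ):

#include <stdio.h>

#define NSEC_PER_SEC    1000000000UL
#define NSEC_PER_USEC   1000UL
#define HZ              250                     /* assumed kernel config */
#define TICK_NSEC       (NSEC_PER_SEC / HZ)     /* 4,000,000 ns per tick */

#define CPUFREQ_DBS_MIN_SAMPLING_INTERVAL (2 * TICK_NSEC / NSEC_PER_USEC)

int main(void)
{
        /* Two tick periods in microseconds: 8000 us with HZ=250. Any
         * sampling_rate below this is now rejected (store path) or
         * raised (init path). */
        printf("%lu us\n", (unsigned long)CPUFREQ_DBS_MIN_SAMPLING_INTERVAL);
        return 0;
}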
Fixes: aa7519af450d (cpufreq: Use transition_delay_us for legacy governors as well)
Reported-by: Andy Tang <andy.tang(a)nxp.com>
Reported-by: Doug Smythies <dsmythies(a)telus.net>
Tested-by: Andy Tang <andy.tang(a)nxp.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
Acked-by: Viresh Kumar <viresh.kumar(a)linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/cpufreq/cpufreq_governor.c | 19 ++++++++++++++++---
1 file changed, 16 insertions(+), 3 deletions(-)
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -22,6 +22,8 @@
#include "cpufreq_governor.h"
+#define CPUFREQ_DBS_MIN_SAMPLING_INTERVAL (2 * TICK_NSEC / NSEC_PER_USEC)
+
static DEFINE_PER_CPU(struct cpu_dbs_info, cpu_dbs);
static DEFINE_MUTEX(gov_dbs_data_mutex);
@@ -47,11 +49,15 @@ ssize_t store_sampling_rate(struct gov_a
{
struct dbs_data *dbs_data = to_dbs_data(attr_set);
struct policy_dbs_info *policy_dbs;
+ unsigned int sampling_interval;
int ret;
- ret = sscanf(buf, "%u", &dbs_data->sampling_rate);
- if (ret != 1)
+
+ ret = sscanf(buf, "%u", &sampling_interval);
+ if (ret != 1 || sampling_interval < CPUFREQ_DBS_MIN_SAMPLING_INTERVAL)
return -EINVAL;
+ dbs_data->sampling_rate = sampling_interval;
+
/*
* We are operating under dbs_data->mutex and so the list and its
* entries can't be freed concurrently.
@@ -430,7 +436,14 @@ int cpufreq_dbs_governor_init(struct cpu
if (ret)
goto free_policy_dbs_info;
- dbs_data->sampling_rate = cpufreq_policy_transition_delay_us(policy);
+ /*
+ * The sampling interval should not be less than the transition latency
+ * of the CPU and it also cannot be too small for dbs_update() to work
+ * correctly.
+ */
+ dbs_data->sampling_rate = max_t(unsigned int,
+ CPUFREQ_DBS_MIN_SAMPLING_INTERVAL,
+ cpufreq_policy_transition_delay_us(policy));
if (!have_governor_per_policy())
gov->gdbs_data = dbs_data;
Patches currently in stable-queue which might be from rafael.j.wysocki(a)intel.com are
queue-4.14/cpufreq-governor-ensure-sufficiently-large-sampling-intervals.patch
On 01/24/2018 11:07 AM, David Woodhouse wrote:
> On Tue, 2018-01-09 at 22:39 +0100, Daniel Borkmann wrote:
>> On 01/09/2018 07:04 PM, Alexei Starovoitov wrote:
>>>
>>> The BPF interpreter has been used as part of the spectre 2 attack CVE-2017-5715.
>>>
>>> A quote from the Google Project Zero blog:
>>> "At this point, it would normally be necessary to locate gadgets in
>>> the host kernel code that can be used to actually leak data by reading
>>> from an attacker-controlled location, shifting and masking the result
>>> appropriately and then using the result of that as offset to an
>>> attacker-controlled address for a load. But piecing gadgets together
>>> and figuring out which ones work in a speculation context seems annoying.
>>> So instead, we decided to use the eBPF interpreter, which is built into
>>> the host kernel - while there is no legitimate way to invoke it from inside
>>> a VM, the presence of the code in the host kernel's text section is sufficient
>>> to make it usable for the attack, just like with ordinary ROP gadgets."
>>>
>>> To make the attacker's job harder, introduce a BPF_JIT_ALWAYS_ON config
>>> option that removes the interpreter from the kernel in favor of JIT-only mode.
>>> So far eBPF JIT is supported by:
>>> x64, arm64, arm32, sparc64, s390, powerpc64, mips64
>>>
>>> The start of the JITed program is randomized and its code page is marked read-only.
>>> In addition, "constant blinding" can be turned on with net.core.bpf_jit_harden.
>>>
>>> v2->v3:
>>> - move __bpf_prog_ret0 under ifdef (Daniel)
>>>
>>> v1->v2:
>>> - fix init order, test_bpf and cBPF (Daniel's feedback)
>>> - fix offloaded bpf (Jakub's feedback)
>>> - add 'return 0' dummy in case something can invoke prog->bpf_func
>>> - retarget bpf tree. For bpf-next the patch would need one extra hunk.
>>> It will be sent when the trees are merged back to net-next
>>>
>>> Considered doing:
>>> int bpf_jit_enable __read_mostly = BPF_EBPF_JIT_DEFAULT;
>>> but it seems better to land the patch as-is and in bpf-next remove
>>> bpf_jit_enable global variable from all JITs, consolidate in one place
>>> and remove this jit_init() function.
>>>
>>> Signed-off-by: Alexei Starovoitov <ast(a)kernel.org>
>>
>> Applied to bpf tree, thanks Alexei!
>
> For stable too?
Yes, this will go into stable as well; a batch of backports will come Thurs/Fri.
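For context on the 'return 0' dummy mentioned in the v1->v2 notes above,
here is a rough userspace mock of the JIT-only behaviour the changelog
describes (assumed names and heavily simplified logic, not the kernel
implementation): a program either JITs successfully or is rejected, and
bpf_func is parked on a return-0 stub so a stray call cannot reach
interpreter code:

#include <stdio.h>
#include <errno.h>

struct bpf_prog {
        int jited;                      /* set by the JIT on success */
        unsigned int (*bpf_func)(const void *ctx);
};

static unsigned int prog_ret0(const void *ctx)
{
        (void)ctx;
        return 0;                       /* safe dummy: no interpreter to enter */
}

/* Stand-in for an arch JIT: would install the compiled entry point and
 * set prog->jited on success. */
static void jit_compile(struct bpf_prog *prog, int arch_has_jit)
{
        prog->jited = arch_has_jit;
}

static int select_runtime(struct bpf_prog *prog, int arch_has_jit)
{
        prog->bpf_func = prog_ret0;     /* dummy, in case something calls it */
        jit_compile(prog, arch_has_jit);
        if (!prog->jited)
                return -EOPNOTSUPP;     /* JIT-only: reject, never interpret */
        return 0;
}

int main(void)
{
        struct bpf_prog p = { 0, NULL };
        int err = select_runtime(&p, 0); /* pretend this arch has no JIT */

        printf("load: %d, stray call returns %u\n", err, p.bpf_func(NULL));
        return 0;
}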