The ADC in the JZ4740 can work either in high-precision mode with a
2.5V range, or in low-precision mode with a 7.5V range. The driver
selects the proper scale according to the maximum voltage of the
battery.
The JZ4770, however, has only one mode, with a 6.6V range. If only one
scale is available, there is no need to change it (and nothing to
change it to), and trying to do so will fail with -EINVAL.
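For illustration, here is a minimal standalone C sketch of the logic
described above. It assumes the IIO convention that a fractional scale
entry in the "available" list occupies two integers (a value and a
power-of-two divisor), so a list length of 2 means exactly one
selectable scale. The helper name select_scale() and the example
values are made up and are not the driver's code:

/*
 * Each fractional entry is two ints: avail[i] / 2^avail[i+1].
 * A length of 2 therefore means a single entry - nothing to select.
 */
#include <stdio.h>

static int select_scale(const int *avail, int len, int max_mv)
{
	int i;

	if (len == 2) {
		/* Only one scale: keep the hardware default and succeed. */
		printf("one scale only, nothing to change\n");
		return 0;
	}

	for (i = 0; i < len; i += 2) {
		/* range in mV represented by this entry: value / 2^divisor */
		int range_mv = avail[i] >> avail[i + 1];

		if (range_mv >= max_mv) {
			printf("selected the %d mV range\n", range_mv);
			return 0;
		}
	}
	return -1;	/* no scale covers the battery's maximum voltage */
}

int main(void)
{
	const int jz4740_like[] = { 2500, 0, 7500, 0 };	/* 2.5V and 7.5V ranges */
	const int jz4770_like[] = { 6600, 0 };		/* a single 6.6V range */

	select_scale(jz4740_like, 4, 4200);	/* picks the 7.5V range */
	select_scale(jz4770_like, 2, 4200);	/* nothing to change */
	return 0;
}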
Fixes: fb24ccfbe1e0 ("power: supply: add Ingenic JZ47xx battery driver.")
Signed-off-by: Paul Cercueil <paul(a)crapouillou.net>
Cc: stable(a)vger.kernel.org
---
drivers/power/supply/ingenic-battery.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/power/supply/ingenic-battery.c b/drivers/power/supply/ingenic-battery.c
index 35816d4b3012..5a53057b4f64 100644
--- a/drivers/power/supply/ingenic-battery.c
+++ b/drivers/power/supply/ingenic-battery.c
@@ -80,6 +80,10 @@ static int ingenic_battery_set_scale(struct ingenic_battery *bat)
if (ret != IIO_AVAIL_LIST || scale_type != IIO_VAL_FRACTIONAL_LOG2)
return -EINVAL;
+ /* Only one (fractional) entry - nothing to change */
+ if (scale_len == 2)
+ return 0;
+
max_mV = bat->info.voltage_max_design_uv / 1000;
for (i = 0; i < scale_len; i += 2) {
--
2.21.0.593.g511ec345e18
Hello,
This series backports the arm64 Spectre patches to the v4.4 stable
kernel. I started this backport from Mark Rutland's backport of Spectre
to 4.9 [1], applied the upstream versions of the patches over 4.4, and
resolved conflicts by checking how they had been resolved in 4.9.
The KVM changes are mostly dropped, as the KVM code in v4.4 is quite
different and including it would make the backport more complex. This
was suggested by the ARM team.
I had to pick a few extra upstream patches to avoid conflicts and to
make things work:
mm/kasan: add API to check memory regions
arm64: kasan: instrument user memory access API
arm64: cpufeature: Add scope for capability check
arm64: cputype info for Broadcom Vulcan
arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs
ARM: 8478/2: arm/arm64: add arm-smccc
arm64: cpufeature: Test 'matches' pointer to find the end of the list
arm64: Introduce cpu_die_early
arm64: Move cpu_die_early to smp.c
arm64: Verify CPU errata work arounds on hotplugged CPU
arm64: errata: Calling enable functions for CPU errata too
arm64: Rearrange CPU errata workaround checks
I also had to drop a few patches, as they did not apply properly due
to missing files/features or were KVM related:
arm64: cpufeature: __this_cpu_has_cap() shouldn't stop early
arm64: KVM: Use per-CPU vector when BP hardening is enabled
arm64: KVM: Make PSCI_VERSION a fast path
mm: Introduce lm_alias
arm64: KVM: Increment PC after handling an SMC trap
arm/arm64: KVM: Consolidate the PSCI include files
arm/arm64: KVM: Add PSCI_VERSION helper
arm/arm64: KVM: Add smccc accessors to PSCI code
arm/arm64: KVM: Implement PSCI 1.0 support
arm/arm64: KVM: Turn kvm_psci_version into a static inline
arm64: KVM: Add SMCCC_ARCH_WORKAROUND_1 fast handling
I have dropped the arch/arm64/crypto/sha256-core.S and sha512-core.S
files, as they weren't part of the upstream commit. I am not sure why
Mark included them, as the commit log doesn't provide any reasoning
for it.
The patches in this series are pushed here [2].
This was tested on a HiKey board (octa-core Cortex-A53), and I verified
that the BP hardening code is getting hit on its CPUs (I had to hack a
bit and enable BP hardening support for the A53 for this).
Changes V1->V2:
- Rebased over 4.4.184 (was 4.4.180 earlier).
- Fixed a build issue with CONFIG_KASAN (Julien).
- Dropped a few patches, mostly KVM stuff (Julien):
arm64: remove duplicate macro __KERNEL__ check
mm: Introduce lm_alias
arm64: KVM: Increment PC after handling an SMC trap
arm/arm64: KVM: Consolidate the PSCI include files
arm/arm64: KVM: Add PSCI_VERSION helper
arm/arm64: KVM: Add smccc accessors to PSCI code
arm/arm64: KVM: Implement PSCI 1.0 support
arm/arm64: KVM: Turn kvm_psci_version into a static inline
arm64: KVM: Add SMCCC_ARCH_WORKAROUND_1 fast handling
- Added a few patches to fix issues reported by Julien:
arm64: cpufeature: Test 'matches' pointer to find the end of the list
arm64: Introduce cpu_die_early
arm64: Move cpu_die_early to smp.c
arm64: Verify CPU errata work arounds on hotplugged CPU
arm64: errata: Calling enable functions for CPU errata too
arm64: Rearrange CPU errata workaround checks
--
viresh
[1] https://patches.linaro.org/cover/133195/ with top commit in 4.9 stable tree:
a3b292fe0560 arm64: futex: Mask __user pointers prior to dereference
[2] https://git.kernel.org/pub/scm/linux/kernel/git/vireshk/linux.git stable/v4.4.y/spectre
-------------------------8<-------------------------
Andre Przywara (1):
arm64: errata: Calling enable functions for CPU errata too
Andrey Ryabinin (1):
mm/kasan: add API to check memory regions
Catalin Marinas (1):
arm64: Factor out TTBR0_EL1 post-update workaround into a specific asm
macro
James Morse (1):
arm64: cpufeature: Test 'matches' pointer to find the end of the list
Jayachandran C (3):
arm64: cputype info for Broadcom Vulcan
arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs
arm64: Branch predictor hardening for Cavium ThunderX2
Jens Wiklander (1):
ARM: 8478/2: arm/arm64: add arm-smccc
Marc Zyngier (11):
arm64: Move post_ttbr_update_workaround to C code
arm64: Move BP hardening to check_and_switch_context
arm64: cpu_errata: Allow an erratum to be match for all revisions of a
core
arm/arm64: KVM: Advertise SMCCC v1.1
arm64: KVM: Report SMCCC_ARCH_WORKAROUND_1 BP hardening support
firmware/psci: Expose PSCI conduit
firmware/psci: Expose SMCCC version through psci_ops
arm/arm64: smccc: Make function identifiers an unsigned quantity
arm/arm64: smccc: Implement SMCCC v1.1 inline primitive
arm64: Add ARM_SMCCC_ARCH_WORKAROUND_1 BP hardening support
arm64: Kill PSCI_GET_VERSION as a variant-2 workaround
Robin Murphy (3):
arm64: Implement array_index_mask_nospec()
arm64: Make USER_DS an inclusive limit
arm64: Use pointer masking to limit uaccess speculation
Suzuki K Poulose (6):
arm64: cpufeature: Add scope for capability check
arm64: Introduce cpu_die_early
arm64: Move cpu_die_early to smp.c
arm64: Verify CPU errata work arounds on hotplugged CPU
arm64: Rearrange CPU errata workaround checks
arm64: Run enable method for errata work arounds on late CPUs
Will Deacon (13):
arm64: barrier: Add CSDB macros to control data-value prediction
arm64: entry: Ensure branch through syscall table is bounded under
speculation
arm64: uaccess: Prevent speculative use of the current addr_limit
arm64: uaccess: Don't bother eliding access_ok checks in __{get,
put}_user
arm64: uaccess: Mask __user pointers for __arch_{clear, copy_*}_user
arm64: cpufeature: Pass capability structure to ->enable callback
drivers/firmware: Expose psci_get_version through psci_ops structure
arm64: Add skeleton to harden the branch predictor against aliasing
attacks
arm64: entry: Apply BP hardening for high-priority synchronous
exceptions
arm64: entry: Apply BP hardening for suspicious interrupts from EL0
arm64: cputype: Add missing MIDR values for Cortex-A72 and Cortex-A75
arm64: Implement branch predictor hardening for affected Cortex-A CPUs
arm64: futex: Mask __user pointers prior to dereference
Yang Shi (1):
arm64: kasan: instrument user memory access API
Yury Norov (1):
arm64: move TASK_* definitions to <asm/processor.h>
MAINTAINERS | 14 ++
arch/arm64/Kconfig | 17 ++
arch/arm64/include/asm/assembler.h | 18 ++
arch/arm64/include/asm/barrier.h | 23 +++
arch/arm64/include/asm/cpufeature.h | 24 ++-
arch/arm64/include/asm/cputype.h | 12 ++
arch/arm64/include/asm/futex.h | 9 +-
arch/arm64/include/asm/memory.h | 15 --
arch/arm64/include/asm/mmu.h | 39 ++++
arch/arm64/include/asm/processor.h | 24 +++
arch/arm64/include/asm/smp.h | 1 +
arch/arm64/include/asm/sysreg.h | 2 +
arch/arm64/include/asm/uaccess.h | 175 ++++++++++++------
arch/arm64/kernel/Makefile | 5 +
arch/arm64/kernel/arm64ksyms.c | 8 +-
arch/arm64/kernel/bpi.S | 75 ++++++++
arch/arm64/kernel/cpu_errata.c | 213 +++++++++++++++++++++-
arch/arm64/kernel/cpufeature.c | 186 +++++++++----------
arch/arm64/kernel/cpuinfo.c | 2 -
arch/arm64/kernel/entry.S | 26 ++-
arch/arm64/kernel/smp.c | 33 +++-
arch/arm64/lib/clear_user.S | 6 +-
arch/arm64/lib/copy_from_user.S | 4 +-
arch/arm64/lib/copy_in_user.S | 4 +-
arch/arm64/lib/copy_to_user.S | 4 +-
arch/arm64/mm/context.c | 12 ++
arch/arm64/mm/fault.c | 31 ++++
arch/arm64/mm/proc.S | 12 +-
drivers/firmware/Kconfig | 3 +
drivers/firmware/psci.c | 58 +++++-
include/linux/arm-smccc.h | 267 ++++++++++++++++++++++++++++
include/linux/kasan-checks.h | 12 ++
include/linux/psci.h | 14 ++
mm/kasan/kasan.c | 12 ++
34 files changed, 1147 insertions(+), 213 deletions(-)
create mode 100644 arch/arm64/kernel/bpi.S
create mode 100644 include/linux/arm-smccc.h
create mode 100644 include/linux/kasan-checks.h
--
2.21.0.rc0.269.g1a574e7a288b
The function posix_acl_create() applies the umask only if the parent
directory has no default ACL (get_acl() returns NULL) or if ACLs are
not supported by the filesystem driver (-EOPNOTSUPP).
However, this happens only after the IS_POSIXACL() check has succeeded.
If the superblock doesn't enable ACL support, the umask will never be
applied: a filesystem which has no ACL support will of course not
enable SB_POSIXACL, rendering the umask-applying code path unreachable.
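For illustration, a small standalone model (plain C, not the kernel
function) of the intended flow: the umask has to be applied both when
the filesystem does not advertise ACL support at all (the !IS_POSIXACL
case this patch adds) and when it does but the parent directory carries
no default ACL. The helper name effective_mode() and the hard-coded
umask are made up:

#include <stdio.h>
#include <sys/stat.h>

#define DEMO_UMASK 022			/* stand-in for current_umask() */

static mode_t effective_mode(mode_t mode, int sb_posixacl, int has_default_acl)
{
	if (S_ISLNK(mode))
		return mode;			/* symlinks are never masked here */

	if (!sb_posixacl)
		return mode & ~DEMO_UMASK;	/* no ACL support: apply umask (the fix) */

	if (!has_default_acl)
		return mode & ~DEMO_UMASK;	/* ACLs supported, but no default ACL */

	return mode;				/* the default ACL provides the permissions */
}

int main(void)
{
	printf("no SB_POSIXACL:       %04o\n",
	       (unsigned int)effective_mode(0666, 0, 0));	/* 0644 with the fix */
	printf("ACLs, no default ACL: %04o\n",
	       (unsigned int)effective_mode(0666, 1, 0));	/* 0644, as before */
	return 0;
}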
This fixes a bug which causes the umask to be ignored with O_TMPFILE
on tmpfs:
https://github.com/MusicPlayerDaemon/MPD/issues/558
https://bugs.gentoo.org/show_bug.cgi?id=686142#c3
https://bugzilla.kernel.org/show_bug.cgi?id=203625
Signed-off-by: Max Kellermann <mk(a)cm4all.com>
Cc: stable(a)vger.kernel.org
---
fs/posix_acl.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/fs/posix_acl.c b/fs/posix_acl.c
index 84ad1c90d535..4071c66f234a 100644
--- a/fs/posix_acl.c
+++ b/fs/posix_acl.c
@@ -589,9 +589,14 @@ posix_acl_create(struct inode *dir, umode_t *mode,
*acl = NULL;
*default_acl = NULL;
- if (S_ISLNK(*mode) || !IS_POSIXACL(dir))
+ if (S_ISLNK(*mode))
return 0;
+ if (!IS_POSIXACL(dir)) {
+ *mode &= ~current_umask();
+ return 0;
+ }
+
p = get_acl(dir, ACL_TYPE_DEFAULT);
if (!p || p == ERR_PTR(-EOPNOTSUPP)) {
*mode &= ~current_umask();
--
2.20.1
Sasha,
you merged my last set of XFS fixes. I asked for one patch not to be
merged yet, as one issue was not yet properly fixed. After some further
review I have identified the commits which fix the kernel crash with
generic/388 reported in kernel.org bugzilla #204223 [0]; this patch set
applies on top of the last one I sent you.
These commits do quite a bit of code refactoring, and the actual fix
lies hidden in the last commit, by Darrick. Due to the amount of
changes, trying to extract the fix is riskier than just carrying the
code refactoring. If we're OK with taking the refactor into stable, my
recommendation is to keep the changes matching upstream more closely
and benefit from the other fixes. The code refactoring was merged in
v4.20, and Darrick's fix is the only fix upstream since the code was
merged.
If others disagree with this approach, please speak up.
I've run a full set of fstests against the following sections 12 times and
have found no regressions against the baseline:
xfs
xfs_logdev
xfs_nocrc_512
xfs_nocrc
xfs_realtimedev
xfs_reflink_1024
xfs_reflink_dev
Review from others is appreciated.
[0] https://bugzilla.kernel.org/show_bug.cgi?id=204223
Allison Henderson (4):
xfs: Move fs/xfs/xfs_attr.h to fs/xfs/libxfs/xfs_attr.h
xfs: Add helper function xfs_attr_try_sf_addname
xfs: Add attibute set and helper functions
xfs: Add attibute remove and helper functions
Brian Foster (1):
xfs: don't trip over uninitialized buffer on extent read of corrupted
inode
Darrick J. Wong (1):
xfs: always rejoin held resources during defer roll
fs/xfs/libxfs/xfs_attr.c | 231 ++++++++++++++++++---------------
fs/xfs/{ => libxfs}/xfs_attr.h | 2 +
fs/xfs/libxfs/xfs_bmap.c | 54 +++++---
fs/xfs/libxfs/xfs_bmap.h | 1 +
fs/xfs/libxfs/xfs_defer.c | 14 +-
fs/xfs/xfs_dquot.c | 17 +--
6 files changed, 183 insertions(+), 136 deletions(-)
rename fs/xfs/{ => libxfs}/xfs_attr.h (98%)
--
2.18.0
pnv_tce() returns a pointer to a TCE entry, and originally a TCE table
would be pre-allocated. For the default case of a 2GB window the table
needs only a single level, and that is fine. However, if more levels
are requested, two threads can race when they both want a pointer to a
TCE entry in the same page of TCEs and both try to allocate the missing
intermediate level.
This adds cmpxchg to handle the race. Note that once a TCE is non-zero,
it can never become zero again.
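For illustration, a userspace sketch of the same allocate-then-publish
pattern using C11 atomics (the kernel code uses READ_ONCE()/cmpxchg()
and its own level allocator; the function name and the 4096-byte level
size below are made up). Both racing threads may allocate, but only the
one whose compare-and-swap from zero succeeds installs its level; the
loser frees its copy and continues with the winner's:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static _Atomic(uintptr_t) slot;	/* models one entry of an upper TCE level */

static uintptr_t get_or_alloc_level(void)
{
	uintptr_t cur = atomic_load_explicit(&slot, memory_order_acquire);
	uintptr_t expected = 0;
	uintptr_t mine;

	if (cur)
		return cur;			/* already populated; never goes back to zero */

	mine = (uintptr_t)calloc(1, 4096);	/* speculative allocation */
	if (!mine)
		return 0;

	if (atomic_compare_exchange_strong(&slot, &expected, mine))
		return mine;			/* we won the race and published our level */

	free((void *)mine);			/* we lost: drop ours ... */
	return expected;			/* ... and use the winner's (left in expected) */
}

int main(void)
{
	printf("level at %#lx\n", (unsigned long)get_or_alloc_level());
	printf("level at %#lx\n", (unsigned long)get_or_alloc_level());
	return 0;
}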
CC: stable(a)vger.kernel.org # v4.19+
Fixes: a68bd1267b72 ("powerpc/powernv/ioda: Allocate indirect TCE levels on demand")
Signed-off-by: Alexey Kardashevskiy <aik(a)ozlabs.ru>
---
The race occurs about 30 times in the first 3 minutes of copying files
via rsync, and that's about it.
This fixes the EEHs from
https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=110810
---
Changes:
v2:
* replaced spin_lock with cmpxchg + READ_ONCE()
---
arch/powerpc/platforms/powernv/pci-ioda-tce.c | 18 +++++++++++++-----
1 file changed, 13 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/platforms/powernv/pci-ioda-tce.c b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
index e28f03e1eb5e..8d6569590161 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda-tce.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
@@ -48,6 +48,9 @@ static __be64 *pnv_alloc_tce_level(int nid, unsigned int shift)
return addr;
}
+static void pnv_pci_ioda2_table_do_free_pages(__be64 *addr,
+ unsigned long size, unsigned int levels);
+
static __be64 *pnv_tce(struct iommu_table *tbl, bool user, long idx, bool alloc)
{
__be64 *tmp = user ? tbl->it_userspace : (__be64 *) tbl->it_base;
@@ -57,9 +60,9 @@ static __be64 *pnv_tce(struct iommu_table *tbl, bool user, long idx, bool alloc)
while (level) {
int n = (idx & mask) >> (level * shift);
- unsigned long tce;
+ unsigned long oldtce, tce = be64_to_cpu(READ_ONCE(tmp[n]));
- if (tmp[n] == 0) {
+ if (!tce) {
__be64 *tmp2;
if (!alloc)
@@ -70,10 +73,15 @@ static __be64 *pnv_tce(struct iommu_table *tbl, bool user, long idx, bool alloc)
if (!tmp2)
return NULL;
- tmp[n] = cpu_to_be64(__pa(tmp2) |
- TCE_PCI_READ | TCE_PCI_WRITE);
+ tce = __pa(tmp2) | TCE_PCI_READ | TCE_PCI_WRITE;
+ oldtce = be64_to_cpu(cmpxchg(&tmp[n], 0,
+ cpu_to_be64(tce)));
+ if (oldtce) {
+ pnv_pci_ioda2_table_do_free_pages(tmp2,
+ ilog2(tbl->it_level_size) + 3, 1);
+ tce = oldtce;
+ }
}
- tce = be64_to_cpu(tmp[n]);
tmp = __va(tce & ~(TCE_PCI_READ | TCE_PCI_WRITE));
idx &= ~mask;
--
2.17.1
Fix a regression introduced by upstream commit fee109901f39
("signal/drbd: Use send_sig not force_sig").
Currently, when a kernel thread is created, all signals are set to be
ignored by default. DRBD uses SIGHUP to end its threads, which means it
is no longer possible to bring down a DRBD resource, because the
signals do not make it through to the thread in question.
This circumstance was previously hidden by the fact that DRBD used
force_sig() to kill its threads. The aforementioned upstream commit
changed this to send_sig(), which means the effects of the signals being
ignored by default are now becoming visible.
Thus, issue an allow_signal() at the start of the thread to explicitly
allow the desired signals.
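For illustration, a minimal self-contained kthread sketch (not DRBD's
code; the thread name and the SIGHUP/kthread_stop() shutdown sequence
are made up for the example) showing the pattern the fix relies on:
the thread has to opt in with allow_signal() before a send_sig() from
elsewhere can reach it:

#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/module.h>
#include <linux/sched/signal.h>

static struct task_struct *demo_task;

static int demo_thread_fn(void *arg)
{
	allow_signal(SIGHUP);	/* kernel threads ignore all signals unless they opt in */

	while (!kthread_should_stop()) {
		if (signal_pending(current)) {
			flush_signals(current);	/* consume the signal; real code would start winding down here */
			pr_info("sig_demo: woken by SIGHUP\n");
		}
		schedule_timeout_interruptible(HZ);
	}
	return 0;
}

static int __init demo_init(void)
{
	demo_task = kthread_run(demo_thread_fn, NULL, "sig_demo");
	return PTR_ERR_OR_ZERO(demo_task);
}

static void __exit demo_exit(void)
{
	send_sig(SIGHUP, demo_task, 0);	/* only delivered because of allow_signal() above */
	kthread_stop(demo_task);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");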
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder(a)linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner(a)linbit.com>
Fixes: fee109901f39 ("signal/drbd: Use send_sig not force_sig")
Cc: stable(a)vger.kernel.org
---
drivers/block/drbd/drbd_main.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index 9bd4ddd12b25..b8b986df6814 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -318,6 +318,9 @@ static int drbd_thread_setup(void *arg)
unsigned long flags;
int retval;
+ allow_signal(DRBD_SIGKILL);
+ allow_signal(SIGXCPU);
+
snprintf(current->comm, sizeof(current->comm), "drbd_%c_%s",
thi->name[0],
resource->name);
--
2.22.0
For various reasons, at least with x86 EFI firmware, the xoffset and
yoffset in the BGRT info are not always reliable.
Extensive testing has shown that when the info is correct, the BGRT
image is always exactly centered horizontally (the yoffset value varies
between boards and is not always predictable).
This commit simplifies / improves bgrt_sanity_check() to check only
that the BGRT image is exactly centered horizontally, and to skip
(re)drawing the image when it is not.
This fixes the BGRT image sometimes being drawn in the wrong place.
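For illustration, a standalone model of the new check (plain C, not the
driver code; the helper name and the example geometry are made up): the
BGRT offsets are trusted only when the image is exactly centered
horizontally on the current framebuffer.

#include <stdbool.h>
#include <stdio.h>

static bool bgrt_is_centered(unsigned int fb_width, unsigned int bmp_width,
			     unsigned int xoffset)
{
	/* integer division, matching what centering firmware computes */
	return xoffset == (fb_width - bmp_width) / 2;
}

int main(void)
{
	printf("%d\n", bgrt_is_centered(1920, 800, 560));	/* 1: centered, draw it */
	printf("%d\n", bgrt_is_centered(1920, 800, 0));		/* 0: stale offsets, skip drawing */
	return 0;
}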
Cc: stable(a)vger.kernel.org
Fixes: 88fe4ceb2447 ("efifb: BGRT: Do not copy the boot graphics for non native resolutions")
Signed-off-by: Hans de Goede <hdegoede(a)redhat.com>
---
drivers/video/fbdev/efifb.c | 27 ++++++---------------------
1 file changed, 6 insertions(+), 21 deletions(-)
diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
index dfa8dd47d19d..5b3cef9bf794 100644
--- a/drivers/video/fbdev/efifb.c
+++ b/drivers/video/fbdev/efifb.c
@@ -122,28 +122,13 @@ static void efifb_copy_bmp(u8 *src, u32 *dst, int width, struct screen_info *si)
*/
static bool efifb_bgrt_sanity_check(struct screen_info *si, u32 bmp_width)
{
- static const int default_resolutions[][2] = {
- { 800, 600 },
- { 1024, 768 },
- { 1280, 1024 },
- };
- u32 i, right_margin;
-
- for (i = 0; i < ARRAY_SIZE(default_resolutions); i++) {
- if (default_resolutions[i][0] == si->lfb_width &&
- default_resolutions[i][1] == si->lfb_height)
- break;
- }
- /* If not a default resolution used for textmode, this should be fine */
- if (i >= ARRAY_SIZE(default_resolutions))
- return true;
-
- /* If the right margin is 5 times smaller then the left one, reject */
- right_margin = si->lfb_width - (bgrt_tab.image_offset_x + bmp_width);
- if (right_margin < (bgrt_tab.image_offset_x / 5))
- return false;
+ /*
+ * All x86 firmwares horizontally center the image (the yoffset
+ * calculations differ between boards, but xoffset is predictable).
+ */
+ u32 expected_xoffset = (si->lfb_width - bmp_width) / 2;
- return true;
+ return bgrt_tab.image_offset_x == expected_xoffset;
}
#else
static bool efifb_bgrt_sanity_check(struct screen_info *si, u32 bmp_width)
--
2.21.0