From: Chester Lin <clin@suse.com>
[ Upstream commit 1d31999cf04c21709f72ceb17e65b54a401330da ]
adjust_lowmem_bounds() checks every memblock in order to find the boundary between lowmem and highmem. However, some memblocks can be marked as NOMAP, so they are not used by the kernel; these should be skipped while calculating the boundary.
Signed-off-by: Chester Lin <clin@suse.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 arch/arm/mm/mmu.c | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 241bf898adf5..7edc6c3f4bd9 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1188,6 +1188,9 @@ void __init adjust_lowmem_bounds(void)
 		phys_addr_t block_start = reg->base;
 		phys_addr_t block_end = reg->base + reg->size;
 
+		if (memblock_is_nomap(reg))
+			continue;
+
 		if (reg->base < vmalloc_limit) {
 			if (block_end > lowmem_limit)
 				/*
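For readers outside the kernel tree, the effect of the skip can be illustrated with a small userspace sketch. The types and the helper below are simplified stand-ins, not the kernel's struct memblock_region or the real adjust_lowmem_bounds():

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for a memblock region (assumption: the real
 * kernel struct carries flags rather than a plain bool). */
struct region {
    uint64_t base;
    uint64_t size;
    bool nomap;   /* MEMBLOCK_NOMAP: present in RAM but never mapped */
};

/* Highest end address among mapped regions below vmalloc_limit --
 * a toy version of the lowmem boundary calculation. */
uint64_t lowmem_limit(const struct region *regs, int n, uint64_t vmalloc_limit)
{
    uint64_t limit = 0;
    for (int i = 0; i < n; i++) {
        uint64_t end = regs[i].base + regs[i].size;

        if (regs[i].nomap)   /* the fix: NOMAP regions do not count */
            continue;
        if (regs[i].base < vmalloc_limit && end > limit)
            limit = end > vmalloc_limit ? vmalloc_limit : end;
    }
    return limit;
}
```

Without the `nomap` check, a reserved firmware carve-out placed high in RAM would drag the lowmem boundary past memory the kernel never maps.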
From: Marc Kleine-Budde <mkl@pengutronix.de>
[ Upstream commit d36673f5918c8fd3533f7c0d4bac041baf39c7bb ]
This patch removes the redundant return statement at the end of the void function can_dellink().
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 drivers/net/can/dev.c | 1 -
 1 file changed, 1 deletion(-)
diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c
index ffc5467a1ec2..1b3d7ec3462c 100644
--- a/drivers/net/can/dev.c
+++ b/drivers/net/can/dev.c
@@ -1071,7 +1071,6 @@ static int can_newlink(struct net *src_net, struct net_device *dev,
 
 static void can_dellink(struct net_device *dev, struct list_head *head)
 {
-	return;
 }
 
 static struct rtnl_link_ops can_link_ops __read_mostly = {
From: Hari Vyas <hari.vyas@broadcom.com>
[ Upstream commit e4ba15debcfd27f60d43da940a58108783bff2a6 ]
The bad_mode() handler is called if we encounter an unknown exception, with the expectation that the subsequent call to panic() will halt the system. Unfortunately, if the exception calling bad_mode() is taken from EL0, then the call to die() can end up killing the current user task and calling schedule() instead of falling through to panic().
Remove the die() call altogether, since we really want to bring down the machine in this "impossible" case.
Signed-off-by: Hari Vyas <hari.vyas@broadcom.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 arch/arm64/kernel/traps.c | 1 -
 1 file changed, 1 deletion(-)
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 28bef94cf792..5962badb3346 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -611,7 +611,6 @@ asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr)
 		handler[reason], smp_processor_id(), esr,
 		esr_get_class_string(esr));
 
-	die("Oops - bad mode", regs, 0);
 	local_irq_disable();
 	panic("bad mode");
 }
On Fri, Nov 22, 2019 at 10:52:48AM +0000, Lee Jones wrote:
From: Hari Vyas <hari.vyas@broadcom.com>
[ Upstream commit e4ba15debcfd27f60d43da940a58108783bff2a6 ]
The bad_mode() handler is called if we encounter an unknown exception, with the expectation that the subsequent call to panic() will halt the system. Unfortunately, if the exception calling bad_mode() is taken from EL0, then the call to die() can end up killing the current user task and calling schedule() instead of falling through to panic().
Remove the die() call altogether, since we really want to bring down the machine in this "impossible" case.
Should this be in newer LTS kernels too? I don't see it in 4.14. We can't take anything into older kernels if it's not in newer ones - we don't want to break users who update their kernels.
On Mon, 25 Nov 2019, Sasha Levin wrote:
On Fri, Nov 22, 2019 at 10:52:48AM +0000, Lee Jones wrote:
From: Hari Vyas <hari.vyas@broadcom.com>
[ Upstream commit e4ba15debcfd27f60d43da940a58108783bff2a6 ]
The bad_mode() handler is called if we encounter an unknown exception, with the expectation that the subsequent call to panic() will halt the system. Unfortunately, if the exception calling bad_mode() is taken from EL0, then the call to die() can end up killing the current user task and calling schedule() instead of falling through to panic().
Remove the die() call altogether, since we really want to bring down the machine in this "impossible" case.
Should this be in newer LTS kernels too? I don't see it in 4.14. We can't take anything into older kernels if it's not in newer ones - we don't want to break users who update their kernels.
Only 3.18, 4.4, 4.9 and 5.3 were studied.
I can look at others if it helps.
On Mon, Nov 25, 2019 at 02:44:29PM +0000, Lee Jones wrote:
On Mon, 25 Nov 2019, Sasha Levin wrote:
On Fri, Nov 22, 2019 at 10:52:48AM +0000, Lee Jones wrote:
From: Hari Vyas <hari.vyas@broadcom.com>
[ Upstream commit e4ba15debcfd27f60d43da940a58108783bff2a6 ]
The bad_mode() handler is called if we encounter an unknown exception, with the expectation that the subsequent call to panic() will halt the system. Unfortunately, if the exception calling bad_mode() is taken from EL0, then the call to die() can end up killing the current user task and calling schedule() instead of falling through to panic().
Remove the die() call altogether, since we really want to bring down the machine in this "impossible" case.
Should this be in newer LTS kernels too? I don't see it in 4.14. We can't take anything into older kernels if it's not in newer ones - we don't want to break users who update their kernels.
Only 3.18, 4.4, 4.9 and 5.3 were studied.
I can look at others if it helps.
You have to look at others, we can't have regressions if people move from one LTS to a newer one.
thanks,
greg k-h
On Mon, 25 Nov 2019, Greg KH wrote:
On Mon, Nov 25, 2019 at 02:44:29PM +0000, Lee Jones wrote:
On Mon, 25 Nov 2019, Sasha Levin wrote:
On Fri, Nov 22, 2019 at 10:52:48AM +0000, Lee Jones wrote:
From: Hari Vyas <hari.vyas@broadcom.com>
[ Upstream commit e4ba15debcfd27f60d43da940a58108783bff2a6 ]
The bad_mode() handler is called if we encounter an unknown exception, with the expectation that the subsequent call to panic() will halt the system. Unfortunately, if the exception calling bad_mode() is taken from EL0, then the call to die() can end up killing the current user task and calling schedule() instead of falling through to panic().
Remove the die() call altogether, since we really want to bring down the machine in this "impossible" case.
Should this be in newer LTS kernels too? I don't see it in 4.14. We can't take anything into older kernels if it's not in newer ones - we don't want to break users who update their kernels.
Only 3.18, 4.4, 4.9 and 5.3 were studied.
I can look at others if it helps.
You have to look at others, we can't have regressions if people move from one LTS to a newer one.
Sure, I understand. Will do from now on.
On Mon, 25 Nov 2019, Lee Jones wrote:
On Mon, 25 Nov 2019, Greg KH wrote:
On Mon, Nov 25, 2019 at 02:44:29PM +0000, Lee Jones wrote:
On Mon, 25 Nov 2019, Sasha Levin wrote:
On Fri, Nov 22, 2019 at 10:52:48AM +0000, Lee Jones wrote:
From: Hari Vyas <hari.vyas@broadcom.com>
[ Upstream commit e4ba15debcfd27f60d43da940a58108783bff2a6 ]
The bad_mode() handler is called if we encounter an unknown exception, with the expectation that the subsequent call to panic() will halt the system. Unfortunately, if the exception calling bad_mode() is taken from EL0, then the call to die() can end up killing the current user task and calling schedule() instead of falling through to panic().
Remove the die() call altogether, since we really want to bring down the machine in this "impossible" case.
Should this be in newer LTS kernels too? I don't see it in 4.14. We can't take anything into older kernels if it's not in newer ones - we don't want to break users who update their kernels.
Only 3.18, 4.4, 4.9 and 5.3 were studied.
I can look at others if it helps.
You have to look at others, we can't have regressions if people move from one LTS to a newer one.
Okay, now sent appropriate patches to linux-4.14.y and linux-4.19.y.
From: Bo Yan <byan@nvidia.com>
[ Upstream commit 703cbaa601ff3fb554d1246c336ba727cc083ea0 ]
cpufreq_resume() can be called even without a preceding cpufreq_suspend(). This can happen in the following scenario:

suspend_devices_and_enter
  --> dpm_suspend_start
        --> dpm_prepare
              --> device_prepare : this function errors out
  --> dpm_suspend : skipped due to the dpm_prepare failure,
                    which means cpufreq_suspend is skipped over
  --> goto Recover_platform, due to the previous error
  --> goto Resume_devices
  --> dpm_resume_end
        --> dpm_resume
              --> cpufreq_resume
If schedutil is used as the frequency governor, cpufreq_resume will eventually call sugov_start, which does the following:
	memset(sg_cpu, 0, sizeof(*sg_cpu));
	....
This effectively erases the function pointer for frequency updates, causing a crash later on. The function pointer would have been set correctly if the subsequent cpufreq_add_update_util_hook ran successfully, but that function returns early because cpufreq_suspend was not called:
	if (WARN_ON(per_cpu(cpufreq_update_util_data, cpu)))
		return;
The fix is to check cpufreq_suspended first. If it is false, cpufreq_suspend was not called in the first place, so do not resume cpufreq.
Signed-off-by: Bo Yan <byan@nvidia.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
[ rjw: Dropped printing a message ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 drivers/cpufreq/cpufreq.c | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index d43cd983a7ec..04101a6dc7f5 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -1646,6 +1646,9 @@ void cpufreq_resume(void)
 	if (!cpufreq_driver)
 		return;
 
+	if (unlikely(!cpufreq_suspended))
+		return;
+
 	cpufreq_suspended = false;
 
 	if (!has_target() && !cpufreq_driver->resume)
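The unbalanced suspend/resume pairing above can be sketched as a tiny userspace model. All names below are illustrative stand-ins for the kernel state, not the real cpufreq code:

```c
#include <stdbool.h>

/* Toy model: the real code also stops/starts governors and touches
 * per-CPU update hooks; here a single int stands in for the hook. */
static bool suspended;
static int update_hook_set;   /* stand-in for cpufreq_update_util_data */

void toy_suspend(void)
{
    suspended = true;
    update_hook_set = 0;      /* hook is removed on suspend */
}

/* Returns true if resume actually ran. The guard mirrors the fix:
 * a resume without a preceding suspend is a no-op. */
bool toy_resume(void)
{
    if (!suspended)           /* the fix: bail out early */
        return false;
    suspended = false;
    update_hook_set = 1;      /* re-register the update hook */
    return true;
}
```

Without the guard, the model would "resume" state that was never torn down, which is exactly how sugov_start ends up memset-ing live data in the real bug.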
From: Dan Carpenter <dan.carpenter@oracle.com>
[ Upstream commit da22f0eea555baf9b0a84b52afe56db2052cfe8d ]
In olden times, closure_return() used to have a hidden return built in. We removed the hidden return but forgot to add a new return here. If "c" were NULL we would oops on the next line, but fortunately "c" is never NULL. Let's just remove the if statement.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 drivers/md/bcache/super.c | 3 ---
 1 file changed, 3 deletions(-)
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index c5bc3e5e921e..3e113be966fe 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1397,9 +1397,6 @@ static void cache_set_flush(struct closure *cl)
 	struct btree *b;
 	unsigned i;
 
-	if (!c)
-		closure_return(cl);
-
 	bch_cache_accounting_destroy(&c->accounting);
 
 	kobject_put(&c->internal);
On Fri, Nov 22, 2019 at 10:52:50AM +0000, Lee Jones wrote:
From: Dan Carpenter <dan.carpenter@oracle.com>
[ Upstream commit da22f0eea555baf9b0a84b52afe56db2052cfe8d ]
In olden times, closure_return() used to have a hidden return built in. We removed the hidden return but forgot to add a new return here. If "c" were NULL we would oops on the next line, but fortunately "c" is never NULL. Let's just remove the if statement.
So this doesn't actually fix anything?
From: Bart Van Assche <bart.vanassche@sandisk.com>
[ Upstream commit 2e91c3694181dc500faffec16c5aaa0ac5e15449 ]
After QUEUE_FLAG_DYING has been set any code that is waiting in get_request() should be woken up. But to get this behaviour blk_set_queue_dying() must be used instead of only setting QUEUE_FLAG_DYING.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 drivers/md/dm.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 2ffe7db75acb..36e6221fabab 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1946,9 +1946,7 @@ static void __dm_destroy(struct mapped_device *md, bool wait)
 	set_bit(DMF_FREEING, &md->flags);
 	spin_unlock(&_minor_lock);
 
-	spin_lock_irq(q->queue_lock);
-	queue_flag_set(QUEUE_FLAG_DYING, q);
-	spin_unlock_irq(q->queue_lock);
+	blk_set_queue_dying(q);
 
 	if (dm_request_based(md) && md->kworker_task)
 		kthread_flush_worker(&md->kworker);
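The difference between only setting the flag and calling blk_set_queue_dying() can be modelled in a few lines. This is a toy model with no real locking or wait queues; the struct and counters are illustrative, not block-layer types:

```c
#include <stdbool.h>

/* Toy queue: in the real block layer the flag lives in queue_flags and
 * the wakeup goes to tasks sleeping in get_request(). */
struct toy_queue {
    bool dying;
    int wakeups;          /* counts wake_up_all()-style calls */
};

/* What the old dm code effectively did: flag only, waiters sleep on. */
void set_flag_only(struct toy_queue *q)
{
    q->dying = true;
}

/* What blk_set_queue_dying() provides: set the flag AND wake anyone
 * blocked waiting for a request, so they can observe the flag. */
void set_dying_and_wake(struct toy_queue *q)
{
    q->dying = true;
    q->wakeups++;
}
```

The point of the patch is the second behaviour: a waiter that is never woken never re-checks `dying`, no matter how the flag is set.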
From: Gang He <ghe@suse.com>
[ Upstream commit a634644751c46238df58bbfe992e30c1668388db ]
Remove ocfs2_is_o2cb_active(). We have similar functions to identify which cluster stack is being used via osb->osb_cluster_stack.
Secondly, the current implementation of ocfs2_is_o2cb_active() is not totally safe. Based on the design of stackglue, we need to hold ocfs2_stack_lock before using ocfs2_stack related data structures, and the active_stack pointer can be NULL in the case of a mount failure.
Link: http://lkml.kernel.org/r/1495441079-11708-1-git-send-email-ghe@suse.com
Signed-off-by: Gang He <ghe@suse.com>
Reviewed-by: Joseph Qi <jiangqi903@gmail.com>
Reviewed-by: Eric Ren <zren@suse.com>
Acked-by: Changwei Ge <ge.changwei@h3c.com>
Cc: Mark Fasheh <mark@fasheh.com>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 fs/ocfs2/dlmglue.c   | 2 +-
 fs/ocfs2/stackglue.c | 6 ------
 fs/ocfs2/stackglue.h | 3 ---
 3 files changed, 1 insertion(+), 10 deletions(-)
diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
index 5729d55da67d..2c3e975126b3 100644
--- a/fs/ocfs2/dlmglue.c
+++ b/fs/ocfs2/dlmglue.c
@@ -3421,7 +3421,7 @@ static int ocfs2_downconvert_lock(struct ocfs2_super *osb,
 	 * we can recover correctly from node failure. Otherwise, we may get
 	 * invalid LVB in LKB, but without DLM_SBF_VALNOTVALID being set.
 	 */
-	if (!ocfs2_is_o2cb_active() &&
+	if (ocfs2_userspace_stack(osb) &&
 	    lockres->l_ops->flags & LOCK_TYPE_USES_LVB)
 		lvb = 1;
 
diff --git a/fs/ocfs2/stackglue.c b/fs/ocfs2/stackglue.c
index 820359096c7a..52c07346bea3 100644
--- a/fs/ocfs2/stackglue.c
+++ b/fs/ocfs2/stackglue.c
@@ -48,12 +48,6 @@ static char ocfs2_hb_ctl_path[OCFS2_MAX_HB_CTL_PATH] = "/sbin/ocfs2_hb_ctl";
  */
 static struct ocfs2_stack_plugin *active_stack;
 
-inline int ocfs2_is_o2cb_active(void)
-{
-	return !strcmp(active_stack->sp_name, OCFS2_STACK_PLUGIN_O2CB);
-}
-EXPORT_SYMBOL_GPL(ocfs2_is_o2cb_active);
-
 static struct ocfs2_stack_plugin *ocfs2_stack_lookup(const char *name)
 {
 	struct ocfs2_stack_plugin *p;
diff --git a/fs/ocfs2/stackglue.h b/fs/ocfs2/stackglue.h
index e3036e1790e8..f2dce10fae54 100644
--- a/fs/ocfs2/stackglue.h
+++ b/fs/ocfs2/stackglue.h
@@ -298,9 +298,6 @@ void ocfs2_stack_glue_set_max_proto_version(struct ocfs2_protocol_version *max_p
 int ocfs2_stack_glue_register(struct ocfs2_stack_plugin *plugin);
 void ocfs2_stack_glue_unregister(struct ocfs2_stack_plugin *plugin);
 
-/* In ocfs2_downconvert_lock(), we need to know which stack we are using */
-int ocfs2_is_o2cb_active(void);
-
 extern struct kset *ocfs2_kset;
 
 #endif /* STACKGLUE_H */
From: Jan Kara <jack@suse.cz>
[ Upstream commit 3abb1a0fc2871f2db52199e1748a1d48a54a3427 ]
These days inode reclaim calls evict_inode() only when it has no pages in the mapping. In that case it is not necessary to wait for transaction commit in ext4_evict_inode() as there can be no pages waiting to be committed. So avoid unnecessary transaction waiting in that case.
We still have to keep the check for the case where ext4_evict_inode() gets called from other paths (e.g. umount) where the inode can still have some page cache pages.
Reported-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 fs/ext4/inode.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index a73056e06bde..2ad48d166f32 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -212,7 +212,8 @@ void ext4_evict_inode(struct inode *inode)
 	 */
 	if (inode->i_ino != EXT4_JOURNAL_INO &&
 	    ext4_should_journal_data(inode) &&
-	    (S_ISLNK(inode->i_mode) || S_ISREG(inode->i_mode))) {
+	    (S_ISLNK(inode->i_mode) || S_ISREG(inode->i_mode)) &&
+	    inode->i_data.nrpages) {
 		journal_t *journal = EXT4_SB(inode->i_sb)->s_journal;
 		tid_t commit_tid = EXT4_I(inode)->i_datasync_tid;
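The resulting condition can be reduced to a small predicate. The booleans below stand in for the real mode and flag checks (S_ISREG/S_ISLNK, ext4_should_journal_data, the journal-inode exclusion); this is illustrative only, not the ext4 code:

```c
#include <stdbool.h>

/* Simplified decision from ext4_evict_inode(): wait for a journal
 * commit only when doing so could matter. The added clause is the
 * nrpages check -- with no cached pages there is nothing whose
 * writeback a commit could be protecting. */
bool must_wait_for_commit(bool is_journal_inode, bool journals_data,
                          bool is_reg_or_link, unsigned long nrpages)
{
    return !is_journal_inode &&
           journals_data &&
           is_reg_or_link &&
           nrpages != 0;   /* the fix: skip the wait when empty */
}
```

Before the patch the predicate lacked the last clause, so reclaim of an already-empty inode still paid for a transaction wait.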
On Fri, Nov 22, 2019 at 10:52:53AM +0000, Lee Jones wrote:
From: Jan Kara <jack@suse.cz>
[ Upstream commit 3abb1a0fc2871f2db52199e1748a1d48a54a3427 ]
These days inode reclaim calls evict_inode() only when it has no pages in the mapping. In that case it is not necessary to wait for transaction commit in ext4_evict_inode() as there can be no pages waiting to be committed. So avoid unnecessary transaction waiting in that case.
We still have to keep the check for the case where ext4_evict_inode() gets called from other paths (e.g. umount) where the inode can still have some page cache pages.
This reads to me like an optimization?
On Mon, 25 Nov 2019, Sasha Levin wrote:
On Fri, Nov 22, 2019 at 10:52:53AM +0000, Lee Jones wrote:
From: Jan Kara <jack@suse.cz>
[ Upstream commit 3abb1a0fc2871f2db52199e1748a1d48a54a3427 ]
These days inode reclaim calls evict_inode() only when it has no pages in the mapping. In that case it is not necessary to wait for transaction commit in ext4_evict_inode() as there can be no pages waiting to be committed. So avoid unnecessary transaction waiting in that case.
We still have to keep the check for the case where ext4_evict_inode() gets called from other paths (e.g. umount) where the inode can still have some page cache pages.
This reads to me like an optimization?
That's okay. Just don't apply anything that isn't suitable.
I'll try to omit such cases in the future.