lists.linaro.org
Linaro-mm-sig
linaro-mm-sig@lists.linaro.org
2 participants
2906 discussions
[PATCH] dma-buf: fix racing conflict of dma_heap_add()
by Dawei Li
Racing conflict could be:

    task A                     task B
    list_for_each_entry
    strcmp(h->name)
                               list_for_each_entry
                               strcmp(h->name)
    kzalloc
                               kzalloc
    ......                     ......
    device_create
                               device_create
    list_add
                               list_add

The root cause is that task B has no way of knowing that someone else (A) has already inserted a heap with the same name by the time it calls list_add, so a name collision can occur.

Fixes: c02a81fba74f ("dma-buf: Add dma-buf heaps framework")
base-commit: 447fb14bf07905b880c9ed1ea92c53d6dd0649d7
Signed-off-by: Dawei Li <set_pte_at(a)outlook.com>
---
 drivers/dma-buf/dma-heap.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
index 8f5848aa144f..ff44c2777b04 100644
--- a/drivers/dma-buf/dma-heap.c
+++ b/drivers/dma-buf/dma-heap.c
@@ -243,11 +243,12 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
 			return ERR_PTR(-EINVAL);
 		}
 	}
-	mutex_unlock(&heap_list_lock);

 	heap = kzalloc(sizeof(*heap), GFP_KERNEL);
-	if (!heap)
+	if (!heap) {
+		mutex_unlock(&heap_list_lock);
 		return ERR_PTR(-ENOMEM);
+	}

 	heap->name = exp_info->name;
 	heap->ops = exp_info->ops;
@@ -284,7 +285,6 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
 		goto err2;
 	}

 	/* Add heap to the list */
-	mutex_lock(&heap_list_lock);
 	list_add(&heap->list, &heap_list);
 	mutex_unlock(&heap_list_lock);

@@ -296,6 +296,7 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
 	xa_erase(&dma_heap_minors, minor);
 err0:
 	kfree(heap);
+	mutex_unlock(&heap_list_lock);

 	return err_ret;
 }
--
2.25.1
2 years, 8 months
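The locking change in the patch above can be illustrated with a small userspace sketch (a pthread mutex standing in for the kernel's `heap_list_lock`; `heap_add`, `MAX_HEAPS` and the flat name table are invented for the example, not the kernel's data structures). The point is that the duplicate-name lookup and the insertion happen inside one critical section, which closes the window the commit message describes.

```c
#include <pthread.h>
#include <string.h>

/*
 * Userspace sketch of the pattern dma_heap_add() needs: the
 * duplicate-name lookup and the insertion must happen under ONE
 * critical section. If the lock were dropped between them (as before
 * the fix), two racing callers could both pass the lookup and both
 * insert the same name.
 */

#define MAX_HEAPS 8

static pthread_mutex_t heap_list_lock = PTHREAD_MUTEX_INITIALIZER;
static const char *heap_names[MAX_HEAPS];
static int heap_count;

/* Returns 0 on success, -1 if the name exists or the table is full. */
static int heap_add(const char *name)
{
	int i, ret = -1;

	pthread_mutex_lock(&heap_list_lock);
	for (i = 0; i < heap_count; i++)
		if (strcmp(heap_names[i], name) == 0)
			goto out; /* duplicate: fail while still holding the lock */
	if (heap_count < MAX_HEAPS) {
		heap_names[heap_count++] = name; /* insert under the same lock */
		ret = 0;
	}
out:
	pthread_mutex_unlock(&heap_list_lock);
	return ret;
}
```

With the single critical section, a second add of the same name is always rejected, no matter how the callers interleave.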
[PATCH v2] dma-buf: cma_heap: Fix typo in comment
by Mark-PK Tsai
Remove duplicated "by" from comment in cma_heap_allocate().

Signed-off-by: Mark-PK Tsai <mark-pk.tsai(a)mediatek.com>
---
 drivers/dma-buf/heaps/cma_heap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index 28fb04eccdd0..cd386ce639f3 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -316,7 +316,7 @@ static struct dma_buf *cma_heap_allocate(struct dma_heap *heap,
 			kunmap_atomic(vaddr);
 			/*
 			 * Avoid wasting time zeroing memory if the process
-			 * has been killed by by SIGKILL
+			 * has been killed by SIGKILL
 			 */
 			if (fatal_signal_pending(current))
 				goto free_cma;
--
2.18.0
2 years, 8 months
[PATCH v3] dma-buf: cma_heap: Remove duplicated 'by' in comment
by Mark-PK Tsai
Remove duplicated 'by' from comment in cma_heap_allocate().

Signed-off-by: Mark-PK Tsai <mark-pk.tsai(a)mediatek.com>
---
 drivers/dma-buf/heaps/cma_heap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index 28fb04eccdd0..cd386ce639f3 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -316,7 +316,7 @@ static struct dma_buf *cma_heap_allocate(struct dma_heap *heap,
 			kunmap_atomic(vaddr);
 			/*
 			 * Avoid wasting time zeroing memory if the process
-			 * has been killed by by SIGKILL
+			 * has been killed by SIGKILL
 			 */
 			if (fatal_signal_pending(current))
 				goto free_cma;
--
2.18.0
2 years, 8 months
etnaviv OOPS, NULL pointer dereference on Linux 6.0.2
by Francesco Dolcini
Hello all,

I got the following Oops on an Apalis iMX6 Dual with 512 MB RAM while running glmark2 tests with the system under memory pressure (OOM killer!). It is not systematic, and I cannot tell whether this is a regression; any suggestions? The system just froze afterward.

[    0.000000] Booting Linux on physical CPU 0x0
[    0.000000] Linux version 6.0.2-6.1.0-devel+git.dab08f7eecdf (oe-user@oe-host) (arm-tdx-linux-gnueabi-gcc (GCC) 11.3.0, GNU ld (GNU Binutils) 2.38.20220708) #1 SMP Sat Oct 15 06:02:59 UTC 2022
[    0.000000] CPU: ARMv7 Processor [412fc09a] revision 10 (ARMv7), cr=10c5387d
[    0.000000] CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing instruction cache
[    0.000000] OF: fdt: Machine model: Toradex Apalis iMX6Q/D Module on Ixora Carrier Board V1.1
...
[    1.749471] etnaviv etnaviv: bound 130000.gpu (ops gpu_ops)
[    1.750527] etnaviv etnaviv: bound 134000.gpu (ops gpu_ops)
[    1.751522] etnaviv etnaviv: bound 2204000.gpu (ops gpu_ops)
[    1.751566] etnaviv-gpu 130000.gpu: model: GC2000, revision: 5108
[    1.753141] etnaviv-gpu 134000.gpu: model: GC320, revision: 5007
[    1.753392] etnaviv-gpu 2204000.gpu: model: GC355, revision: 1215
[    1.753421] etnaviv-gpu 2204000.gpu: Ignoring GPU with VG and FE2.0
[    1.756559] [drm] Initialized etnaviv 1.3.0 20151214 for etnaviv on minor 0
...
[  480.994256] Out of memory: Killed process 1740 (Qt5_CinematicEx) total-vm:242656kB, anon-rss:105212kB, file-rss:9864kB, shmem-rss:1304kB, UID:0 pgtables:192kB oom_score_adj:0
[  481.068691] 8<--- cut here ---
[  481.072037] Unable to handle kernel NULL pointer dereference at virtual address 00000004
[  481.080366] [00000004] *pgd=00000000
[  481.083994] Internal error: Oops: 805 [#1] SMP ARM
[  481.088813] Modules linked in: 8021q imx_sdma virt_dma coda_vpu v4l2_jpeg imx_vdoa dw_hdmi_ahb_audio fuse
[  481.098458] CPU: 1 PID: 1755 Comm: QSGRenderThread Not tainted 6.0.2-6.1.0-devel+git.dab08f7eecdf #1
[  481.107619] Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree)
[  481.114157] PC is at etnaviv_gem_free_object+0x40/0x128
[  481.119412] LR is at lock_is_held_type+0xa4/0x15c
[  481.124138] pc : [<c0787f90>]    lr : [<c0e46250>]    psr: 60030113
[  481.130421] sp : e1155da8  ip : 00000000  fp : 0000000c
[  481.135670] r10: c34ef400  r9 : c262066c  r8 : 00000122
[  481.140916] r7 : c2153000  r6 : c2153000  r5 : 00000870  r4 : c25f24a0
[  481.147460] r3 : 00000000  r2 : 00000000  r1 : 00000100  r0 : 00000000
[  481.153997] Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
[  481.161143] Control: 10c5387d  Table: 2caf004a  DAC: 00000051
[  481.166896] Register r0 information: NULL pointer
[  481.171615] Register r1 information: non-paged memory
[  481.176694] Register r2 information: NULL pointer
[  481.181429] Register r3 information: NULL pointer
[  481.181441] Register r4 information: slab kmalloc-128 start c25f2480 pointer offset 32 size 128
[  481.186173] Register r5 information: non-paged memory
[  481.186181] Register r6 information: slab kmalloc-512 start c2153000 pointer offset 0 size 512
[  481.199953] Register r7 information: slab kmalloc-512 start c2153000 pointer offset 0 size 512
[  481.199975] Register r8 information: non-paged memory
[  481.199983] Register r9 information: slab kmalloc-2k start c2620000 pointer offset 1644 size 2048
[  481.222276] Register r10 information: slab kmalloc-1k start c34ef400 pointer offset 0 size 1024
[  481.222297] Register r11 information: non-paged memory
[  481.245038] Register r12 information: NULL pointer
[  481.245056] Process QSGRenderThread (pid: 1755, stack limit = 0xd30acffa)
[  481.245070] Stack: (0xe1155da8 to 0xe1156000)
[  481.245084] 5da0:                   c0787f50 fffffff4 00000870 c8102400 c212f000 c2620000
[  481.245094] 5dc0: c262066c c34ef400 0000000c c078693c 00000003 00000000 00000000 c07e2960
[  481.245103] 5de0: c2153000 042beef1 c2153a00 c2620000 c8102400 c8102940 c34ef5e4 c07324ec
[  481.245112] 5e00: c262066c c34ef400 0000000c c2153a00 c2620000 c34ef400 e1155e6c c0732a5c
[  481.245121] 5e20: c00c642e 0000000c c212f000 0000000c c0f6d968 e1155e6c c34ef400 c0723448
[  481.245130] 5e40: 0000e280 00000001 c12b5820 c212f000 aed13e60 e1155e6c 0000002e c2f39000
[  481.245138] 5e60: c0732e44 00000051 00000000 00000000 00000000 0000000c c212f000 c212f7a0
[  481.318348] 5e80: 00000000 c018b61c c212f000 c16e0d20 c03514d8 c156155c 60070013 c0193790
[  481.318369] 5ea0: b28a3000 00000254 c6d00280 00000001 00000000 042beef1 00000009 00004000
[  481.318378] 5ec0: c212f000 c3caf280 00000001 c2f39000 00000028 c03514f0 00000000 00000000
[  481.342916] 5ee0: c03513f0 c212f7a0 00000000 042beef1 aed13e60 c00c642e c2f39001 c0100080
[  481.342928] 5f00: aed13e60 c212f000 c2f39000 c25b8710 00000009 c0342234 00000000 042beef1
[  481.342938] 5f20: c36866e0 80000007 c212f000 b28a311c c3686680 c36866e0 e1155fb0 80000007
[  481.342948] 5f40: c212f000 c0e516b0 aefd7cd0 c01d34b0 000001e0 00000000 00000000 00000000
[  481.342958] 5f60: 00000193 00000007 c160fd90 b28a311c e1155fb0 c0e51500 0000021c 042beef1
[  481.342969] 5f80: adf87818 aed13e90 aed13e60 c00c642e 00000036 c01002b4 c212f000 00000036
[  481.342978] 5fa0: adf87818 c0100080 aed13e90 aed13e60 00000009 c00c642e aed13e60 aed13e40
[  481.342988] 5fc0: aed13e90 aed13e60 c00c642e 00000036 00000001 0000021c 00870000 adf87818
[  481.408411] 5fe0: 00000036 aed13e28 b6088089 b6001ae6 60070030 00000009 00000000 00000000
[  481.408431] etnaviv_gem_free_object from etnaviv_gem_prime_import_sg_table+0x12c/0x160
[  481.408469] etnaviv_gem_prime_import_sg_table from drm_gem_prime_import_dev+0x98/0x150
[  481.408509] drm_gem_prime_import_dev from drm_gem_prime_fd_to_handle+0x188/0x1f8
[  481.408528] drm_gem_prime_fd_to_handle from drm_ioctl+0x1e8/0x3a0
[  481.408545] drm_ioctl from sys_ioctl+0x530/0xdbc
[  481.408571] sys_ioctl from ret_fast_syscall+0x0/0x1c
[  481.408587] Exception stack(0xe1155fa8 to 0xe1155ff0)
[  481.408599] 5fa0:                   aed13e90 aed13e60 00000009 c00c642e aed13e60 aed13e40
[  481.408608] 5fc0: aed13e90 aed13e60 c00c642e 00000036 00000001 0000021c 00870000 adf87818
[  481.477625] 5fe0: 00000036 aed13e28 b6088089 b6001ae6
[  481.477641] Code: e5962174 e59f80e0 e3a01c01 e1a07006 (e5823004)
[  481.477819] ---[ end trace 0000000000000000 ]---

Francesco
2 years, 8 months
[RFC][PATCH v2 12/31] timers: dma-buf: Use del_timer_shutdown() before freeing timer
by Steven Rostedt
From: "Steven Rostedt (Google)" <rostedt(a)goodmis.org>

Before a timer is freed, del_timer_shutdown() must be called.

Link: https://lore.kernel.org/all/20220407161745.7d6754b3@gandalf.local.home/

Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: linux-media(a)vger.kernel.org
Cc: dri-devel(a)lists.freedesktop.org
Cc: linaro-mm-sig(a)lists.linaro.org
Signed-off-by: Steven Rostedt (Google) <rostedt(a)goodmis.org>
---
 drivers/dma-buf/st-dma-fence.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/dma-buf/st-dma-fence.c b/drivers/dma-buf/st-dma-fence.c
index fb6e0a6ae2c9..c67b70205b6f 100644
--- a/drivers/dma-buf/st-dma-fence.c
+++ b/drivers/dma-buf/st-dma-fence.c
@@ -412,7 +412,7 @@ static int test_wait_timeout(void *arg)
 	err = 0;

 err_free:
-	del_timer_sync(&wt.timer);
+	del_timer_shutdown(&wt.timer);
 	destroy_timer_on_stack(&wt.timer);
 	dma_fence_signal(wt.f);
 	dma_fence_put(wt.f);
--
2.35.1
2 years, 8 months
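A rough userspace analog of why the patch above replaces the plain synchronous cancel (this is an assumed, simplified model, not the kernel timer API): a sync-cancel stops a pending timer but does not stop anyone from re-arming it afterwards, while a shutdown-cancel also forbids all future arming, which is what makes freeing the timer's memory safe.

```c
#include <stdbool.h>

/*
 * Illustrative sketch only: struct simple_timer and these helpers are
 * invented for the example. timer_cancel() models del_timer_sync()
 * (cancel, but re-arming is still allowed); timer_shutdown() models
 * del_timer_shutdown() (cancel AND refuse any later arming, so the
 * object can be freed without a late re-arm touching freed memory).
 */
struct simple_timer {
	bool pending;  /* armed and waiting to fire */
	bool shutdown; /* no further arming allowed */
};

static bool timer_arm(struct simple_timer *t)
{
	if (t->shutdown)
		return false; /* arming after shutdown is rejected */
	t->pending = true;
	return true;
}

static void timer_cancel(struct simple_timer *t) /* ~del_timer_sync() */
{
	t->pending = false; /* cancelled, but re-arm still possible */
}

static void timer_shutdown(struct simple_timer *t) /* ~del_timer_shutdown() */
{
	t->pending = false;
	t->shutdown = true;
}
```

The shutdown flag is the extra guarantee: once set, no path (such as a handler that re-arms its own timer) can make the timer pending again.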
[PATCH] dma-buf: cma_heap: Fix typo in comment
by Mark-PK Tsai
Fix typo in comment.

Signed-off-by: Mark-PK Tsai <mark-pk.tsai(a)mediatek.com>
---
 drivers/dma-buf/heaps/cma_heap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index 28fb04eccdd0..cd386ce639f3 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -316,7 +316,7 @@ static struct dma_buf *cma_heap_allocate(struct dma_heap *heap,
 			kunmap_atomic(vaddr);
 			/*
 			 * Avoid wasting time zeroing memory if the process
-			 * has been killed by by SIGKILL
+			 * has been killed by SIGKILL
 			 */
 			if (fatal_signal_pending(current))
 				goto free_cma;
--
2.18.0
2 years, 8 months
Re: [syzbot] KASAN: use-after-free Read in task_work_run (2)
by syzbot
syzbot has found a reproducer for the following issue on:

HEAD commit:    88619e77b33d net: stmmac: rk3588: Allow multiple gmac cont..
git tree:       bpf
console output: https://syzkaller.appspot.com/x/log.txt?x=1646d6f2880000
kernel config:  https://syzkaller.appspot.com/x/.config?x=a66c6c673fb555e8
dashboard link: https://syzkaller.appspot.com/bug?extid=9228d6098455bb209ec8
compiler:       gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=12bc425e880000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=1126516e880000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/f8435d5c2c21/disk-88619e77.raw…
vmlinux: https://storage.googleapis.com/syzbot-assets/551d8a013e81/vmlinux-88619e77.…
kernel image: https://storage.googleapis.com/syzbot-assets/7d3f5c29064d/bzImage-88619e77.…

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+9228d6098455bb209ec8(a)syzkaller.appspotmail.com

==================================================================
BUG: KASAN: use-after-free in task_work_run+0x1b0/0x270 kernel/task_work.c:178
Read of size 8 at addr ffff8880752b1c18 by task syz-executor361/3766

CPU: 0 PID: 3766 Comm: syz-executor361 Not tainted 6.1.0-rc2-syzkaller-00073-g88619e77b33d #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/11/2022
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 print_address_description mm/kasan/report.c:284 [inline]
 print_report+0x15e/0x45d mm/kasan/report.c:395
 kasan_report+0xbb/0x1f0 mm/kasan/report.c:495
 task_work_run+0x1b0/0x270 kernel/task_work.c:178
 exit_task_work include/linux/task_work.h:38 [inline]
 do_exit+0xb35/0x2a20 kernel/exit.c:820
 do_group_exit+0xd0/0x2a0 kernel/exit.c:950
 get_signal+0x21a1/0x2430 kernel/signal.c:2858
 arch_do_signal_or_restart+0x82/0x2300 arch/x86/kernel/signal.c:869
 exit_to_user_mode_loop kernel/entry/common.c:168 [inline]
 exit_to_user_mode_prepare+0x15f/0x250 kernel/entry/common.c:203
 __syscall_exit_to_user_mode_work kernel/entry/common.c:285 [inline]
 syscall_exit_to_user_mode+0x19/0x50 kernel/entry/common.c:296
 do_syscall_64+0x42/0xb0 arch/x86/entry/common.c:86
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7fb9f674b089
Code: Unable to access opcode bytes at 0x7fb9f674b05f.
RSP: 002b:00007fb9f66fb318 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
RAX: 0000000000000001 RBX: 00007fb9f67da1a8 RCX: 00007fb9f674b089
RDX: 00000000000f4240 RSI: 0000000000000081 RDI: 00007fb9f67da1ac
RBP: 00007fb9f67da1a0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000003100000400
R13: 00007fff658570cf R14: 00007fb9f66fb400 R15: 0000000000022000
 </TASK>

Allocated by task 3766:
 kasan_save_stack+0x1e/0x40 mm/kasan/common.c:45
 kasan_set_track+0x21/0x30 mm/kasan/common.c:52
 __kasan_slab_alloc+0x7e/0x80 mm/kasan/common.c:325
 kasan_slab_alloc include/linux/kasan.h:201 [inline]
 slab_post_alloc_hook mm/slab.h:737 [inline]
 slab_alloc_node mm/slub.c:3398 [inline]
 kmem_cache_alloc_node+0x2fc/0x400 mm/slub.c:3443
 perf_event_alloc.part.0+0x69/0x3bc0 kernel/events/core.c:11625
 perf_event_alloc kernel/events/core.c:12174 [inline]
 __do_sys_perf_event_open+0x4ae/0x32d0 kernel/events/core.c:12272
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

Freed by task 0:
 kasan_save_stack+0x1e/0x40 mm/kasan/common.c:45
 kasan_set_track+0x21/0x30 mm/kasan/common.c:52
 kasan_save_free_info+0x2a/0x40 mm/kasan/generic.c:511
 ____kasan_slab_free mm/kasan/common.c:236 [inline]
 ____kasan_slab_free+0x160/0x1c0 mm/kasan/common.c:200
 kasan_slab_free include/linux/kasan.h:177 [inline]
 slab_free_hook mm/slub.c:1724 [inline]
 slab_free_freelist_hook+0x8b/0x1c0 mm/slub.c:1750
 slab_free mm/slub.c:3661 [inline]
 kmem_cache_free+0xea/0x5b0 mm/slub.c:3683
 rcu_do_batch kernel/rcu/tree.c:2250 [inline]
 rcu_core+0x81f/0x1980 kernel/rcu/tree.c:2510
 __do_softirq+0x1f7/0xad8 kernel/softirq.c:571

Last potentially related work creation:
 kasan_save_stack+0x1e/0x40 mm/kasan/common.c:45
 __kasan_record_aux_stack+0xbc/0xd0 mm/kasan/generic.c:481
 call_rcu+0x99/0x820 kernel/rcu/tree.c:2798
 put_event kernel/events/core.c:5095 [inline]
 perf_event_release_kernel+0x6f2/0x940 kernel/events/core.c:5210
 perf_release+0x33/0x40 kernel/events/core.c:5220
 __fput+0x27c/0xa90 fs/file_table.c:320
 task_work_run+0x16b/0x270 kernel/task_work.c:179
 resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:171 [inline]
 exit_to_user_mode_prepare+0x23c/0x250 kernel/entry/common.c:203
 __syscall_exit_to_user_mode_work kernel/entry/common.c:285 [inline]
 syscall_exit_to_user_mode+0x19/0x50 kernel/entry/common.c:296
 do_syscall_64+0x42/0xb0 arch/x86/entry/common.c:86
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

Second to last potentially related work creation:
 kasan_save_stack+0x1e/0x40 mm/kasan/common.c:45
 __kasan_record_aux_stack+0xbc/0xd0 mm/kasan/generic.c:481
 task_work_add+0x7b/0x2c0 kernel/task_work.c:48
 event_sched_out+0xe35/0x1190 kernel/events/core.c:2294
 __perf_remove_from_context+0x87/0xc40 kernel/events/core.c:2359
 event_function+0x29e/0x3e0 kernel/events/core.c:254
 remote_function kernel/events/core.c:92 [inline]
 remote_function+0x11e/0x1a0 kernel/events/core.c:72
 __flush_smp_call_function_queue+0x205/0x9a0 kernel/smp.c:630
 __sysvec_call_function_single+0xca/0x4d0 arch/x86/kernel/smp.c:248
 sysvec_call_function_single+0x8e/0xc0 arch/x86/kernel/smp.c:243
 asm_sysvec_call_function_single+0x16/0x20 arch/x86/include/asm/idtentry.h:657

The buggy address belongs to the object at ffff8880752b17c0
 which belongs to the cache perf_event of size 1392
The buggy address is located 1112 bytes inside of
 1392-byte region [ffff8880752b17c0, ffff8880752b1d30)

The buggy address belongs to the physical page:
page:ffffea0001d4ac00 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x752b0
head:ffffea0001d4ac00 order:3 compound_mapcount:0 compound_pincount:0
flags: 0xfff00000010200(slab|head|node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000010200 0000000000000000 dead000000000122 ffff8880118c23c0
raw: 0000000000000000 0000000080150015 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0xd20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 3754, tgid 3753 (syz-executor361), ts 58662170660, free_ts 58383135648
 prep_new_page mm/page_alloc.c:2538 [inline]
 get_page_from_freelist+0x10b5/0x2d50 mm/page_alloc.c:4287
 __alloc_pages+0x1c7/0x5a0 mm/page_alloc.c:5554
 alloc_pages+0x1a6/0x270 mm/mempolicy.c:2285
 alloc_slab_page mm/slub.c:1794 [inline]
 allocate_slab+0x213/0x300 mm/slub.c:1939
 new_slab mm/slub.c:1992 [inline]
 ___slab_alloc+0xa91/0x1400 mm/slub.c:3180
 __slab_alloc.constprop.0+0x56/0xa0 mm/slub.c:3279
 slab_alloc_node mm/slub.c:3364 [inline]
 kmem_cache_alloc_node+0x189/0x400 mm/slub.c:3443
 perf_event_alloc.part.0+0x69/0x3bc0 kernel/events/core.c:11625
 perf_event_alloc kernel/events/core.c:12174 [inline]
 __do_sys_perf_event_open+0x4ae/0x32d0 kernel/events/core.c:12272
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
page last free stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1458 [inline]
 free_pcp_prepare+0x65c/0xd90 mm/page_alloc.c:1508
 free_unref_page_prepare mm/page_alloc.c:3386 [inline]
 free_unref_page+0x19/0x4d0 mm/page_alloc.c:3482
 __unfreeze_partials+0x17c/0x1a0 mm/slub.c:2586
 qlink_free mm/kasan/quarantine.c:168 [inline]
 qlist_free_all+0x6a/0x170 mm/kasan/quarantine.c:187
 kasan_quarantine_reduce+0x180/0x200 mm/kasan/quarantine.c:294
 __kasan_slab_alloc+0x62/0x80 mm/kasan/common.c:302
 kasan_slab_alloc include/linux/kasan.h:201 [inline]
 slab_post_alloc_hook mm/slab.h:737 [inline]
 slab_alloc_node mm/slub.c:3398 [inline]
 slab_alloc mm/slub.c:3406 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3413 [inline]
 kmem_cache_alloc+0x2ac/0x3c0 mm/slub.c:3422
 kmem_cache_zalloc include/linux/slab.h:702 [inline]
 alloc_buffer_head+0x20/0x140 fs/buffer.c:2899
 alloc_page_buffers+0x280/0x790 fs/buffer.c:829
 create_empty_buffers+0x2c/0xf20 fs/buffer.c:1543
 ext4_block_write_begin+0x10a7/0x15f0 fs/ext4/inode.c:1074
 ext4_da_write_begin+0x44c/0xb50 fs/ext4/inode.c:3003
 generic_perform_write+0x252/0x570 mm/filemap.c:3753
 ext4_buffered_write_iter+0x15b/0x460 fs/ext4/file.c:285
 ext4_file_write_iter+0x8b8/0x16e0 fs/ext4/file.c:700
 __kernel_write_iter+0x25e/0x730 fs/read_write.c:517

Memory state around the buggy address:
 ffff8880752b1b00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8880752b1b80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff8880752b1c00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                        ^
 ffff8880752b1c80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8880752b1d00: fb fb fb fb fb fb fc fc fc fc fc fc fc fc fc fc
==================================================================
2 years, 8 months
[PATCH] parport_pc: Remove WCH CH382 PCI-E single parallel port card.
by Zhang Xincheng
The WCH CH382L PCI-E adapter with one parallel port is already handled by parport_serial, so remove the duplicate entry from parport_pc.

Signed-off-by: Zhang Xincheng <zhangxincheng(a)uniontech.com>
---
 drivers/parport/parport_pc.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/parport/parport_pc.c b/drivers/parport/parport_pc.c
index 7c45927e2131..cf0cefe38e90 100644
--- a/drivers/parport/parport_pc.c
+++ b/drivers/parport/parport_pc.c
@@ -2613,7 +2613,6 @@ enum parport_pc_pci_cards {
 	netmos_9901,
 	netmos_9865,
 	quatech_sppxp100,
-	wch_ch382l,
 };

@@ -2677,7 +2676,6 @@ static struct parport_pc_pci {
 	/* netmos_9901 */		{ 1, { { 0, -1 }, } },
 	/* netmos_9865 */		{ 1, { { 0, -1 }, } },
 	/* quatech_sppxp100 */		{ 1, { { 0, 1 }, } },
-	/* wch_ch382l */		{ 1, { { 2, -1 }, } },
 };

 static const struct pci_device_id parport_pc_pci_tbl[] = {
@@ -2769,8 +2767,6 @@ static const struct pci_device_id parport_pc_pci_tbl[] = {
 	/* Quatech SPPXP-100 Parallel port PCI ExpressCard */
 	{ PCI_VENDOR_ID_QUATECH, PCI_DEVICE_ID_QUATECH_SPPXP_100,
 	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, quatech_sppxp100 },
-	/* WCH CH382L PCI-E single parallel port card */
-	{ 0x1c00, 0x3050, 0x1c00, 0x3050, 0, 0, wch_ch382l },
 	{ 0, }	/* terminate list */
 };
 MODULE_DEVICE_TABLE(pci, parport_pc_pci_tbl);
--
2.20.1
2 years, 8 months
[PATCH v4] drm/sched: Fix kernel NULL pointer dereference error
by Arvind Yadav
- This is purely a timing issue: the job can sometimes be freed before it is done. To fix this, move the 'dma_fence_cb' callback from the job (struct drm_sched_job) to the scheduler fence (struct drm_sched_fence).
- Add drm_sched_fence_set_parent() and drm_sched_fence_clear_parent() to move the parent-fence handling into sched_fence.c; this part is just cleanup.

BUG: kernel NULL pointer dereference, address: 0000000000000088
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 0 P4D 0
Oops: 0000 [#1] PREEMPT SMP NOPTI
CPU: 2 PID: 0 Comm: swapper/2 Not tainted 6.0.0-rc2-custom #1
Hardware name: AMD Dibbler/Dibbler, BIOS RDB1107CC 09/26/2018
RIP: 0010:drm_sched_job_done.isra.0+0x11/0x140 [gpu_sched]
Code: 8b fe ff ff be 03 00 00 00 e8 7b da b7 e3 e9 d4 fe ff ff 66 0f 1f 44 00 00 0f 1f 44 00 00 55 48 89 e5 41 55 41 54 49 89 fc 53 <48> 8b 9f 88 00 00 00 f0 ff 8b f0 00 00 00 48 8b 83 80 01 00 00 f0
RSP: 0018:ffffb1b1801d4d38 EFLAGS: 00010087
RAX: ffffffffc0aa48b0 RBX: ffffb1b1801d4d70 RCX: 0000000000000018
RDX: 000036c70afb7c1d RSI: ffff8a45ca413c60 RDI: 0000000000000000
RBP: ffffb1b1801d4d50 R08: 00000000000000b5 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
R13: ffffb1b1801d4d70 R14: ffff8a45c4160000 R15: ffff8a45c416a708
FS: 0000000000000000(0000) GS:ffff8a48a0a80000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000088 CR3: 000000014ad50000 CR4: 00000000003506e0
Call Trace:
 <IRQ>
 drm_sched_job_done_cb+0x12/0x20 [gpu_sched]
 dma_fence_signal_timestamp_locked+0x7e/0x110
 dma_fence_signal+0x31/0x60
 amdgpu_fence_process+0xc4/0x140 [amdgpu]
 gfx_v9_0_eop_irq+0x9d/0xd0 [amdgpu]
 amdgpu_irq_dispatch+0xb7/0x210 [amdgpu]
 amdgpu_ih_process+0x86/0x100 [amdgpu]
 amdgpu_irq_handler+0x24/0x60 [amdgpu]
 __handle_irq_event_percpu+0x4b/0x190
 handle_irq_event_percpu+0x15/0x50
 handle_irq_event+0x39/0x60
 handle_edge_irq+0xaf/0x210
 __common_interrupt+0x6e/0x110
 common_interrupt+0xc1/0xe0
 </IRQ>
 <TASK>

Signed-off-by: Arvind Yadav <Arvind.Yadav(a)amd.com>
---
Changes in v2: Move the 'dma_fence_cb' callback from the job (struct drm_sched_job) to the scheduler fence (struct drm_sched_fence) instead of adding a NULL check for s_fence.
Changes in v3: Added the drm_sched_fence_set_parent() function (and the other *_parent_cb helpers) in sched_fence.c. Moved the parent-fence initialization and callback installation into it (just cleanup).
Changes in v4: Add the drm_sched_fence_clear_parent() function in sched_fence.c and make the changes requested in review.
---
 drivers/gpu/drm/scheduler/sched_fence.c | 64 +++++++++++++++++++++++++
 drivers/gpu/drm/scheduler/sched_main.c  | 53 ++++----------------
 include/drm/gpu_scheduler.h             | 10 +++-
 3 files changed, 81 insertions(+), 46 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
index 7fd869520ef2..68343614f9ed 100644
--- a/drivers/gpu/drm/scheduler/sched_fence.c
+++ b/drivers/gpu/drm/scheduler/sched_fence.c
@@ -78,6 +78,70 @@ static void drm_sched_fence_free_rcu(struct rcu_head *rcu)
 	kmem_cache_free(sched_fence_slab, fence);
 }

+/**
+ * drm_sched_fence_parent_cb - the callback for a done job
+ * @f: fence
+ * @cb: fence callbacks
+ */
+static void drm_sched_fence_parent_cb(struct dma_fence *f, struct dma_fence_cb *cb)
+{
+	struct drm_sched_fence *s_fence = container_of(cb, struct drm_sched_fence,
+						       cb);
+	struct drm_gpu_scheduler *sched = s_fence->sched;
+
+	atomic_dec(&sched->hw_rq_count);
+	atomic_dec(sched->score);
+
+	dma_fence_get(&s_fence->finished);
+	drm_sched_fence_finished(s_fence);
+	dma_fence_put(&s_fence->finished);
+	wake_up_interruptible(&sched->wake_up_worker);
+}
+
+/**
+ * drm_sched_fence_clear_parent - Remove callbacks from pending list
+ * @s_fence: pointer to the fence
+ *
+ * Remove callbacks from pending list and clear the parent fence.
+ */
+bool drm_sched_fence_clear_parent(struct drm_sched_fence *s_fence)
+{
+	if (s_fence->parent &&
+	    dma_fence_remove_callback(s_fence->parent, &s_fence->cb)) {
+		dma_fence_put(s_fence->parent);
+		s_fence->parent = NULL;
+		return true;
+	}
+
+	return false;
+}
+
+/**
+ * drm_sched_fence_set_parent - set the parent fence and add the callback
+ * @s_fence: pointer to the fence
+ * fence: pointer to the hw fence
+ *
+ * Set the parent fence and install the callback for a done job.
+ */
+void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence,
+				struct dma_fence *fence)
+{
+	int r;
+
+	if (s_fence->parent &&
+	    dma_fence_remove_callback(s_fence->parent, &s_fence->cb))
+		dma_fence_put(s_fence->parent);
+
+	/* We keep the reference of the parent fence here. */
+	swap(s_fence->parent, fence);
+	dma_fence_put(fence);
+
+	r = dma_fence_add_callback(s_fence->parent, &s_fence->cb,
+				   drm_sched_fence_parent_cb);
+	if (r == -ENOENT)
+		drm_sched_fence_parent_cb(NULL, &s_fence->cb);
+}
+
 /**
  * drm_sched_fence_free - free up an uninitialized fence
  *
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 4cc59bae38dd..30597d9a949f 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -253,13 +253,12 @@ drm_sched_rq_select_entity_fifo(struct drm_sched_rq *rq)

 /**
  * drm_sched_job_done - complete a job
- * @s_job: pointer to the job which is done
+ * @s_fence: pointer to the fence of a done job
  *
  * Finish the job's fence and wake up the worker thread.
  */
-static void drm_sched_job_done(struct drm_sched_job *s_job)
+static void drm_sched_job_done(struct drm_sched_fence *s_fence)
 {
-	struct drm_sched_fence *s_fence = s_job->s_fence;
 	struct drm_gpu_scheduler *sched = s_fence->sched;

 	atomic_dec(&sched->hw_rq_count);
@@ -273,18 +272,6 @@ static void drm_sched_job_done(struct drm_sched_job *s_job)
 	wake_up_interruptible(&sched->wake_up_worker);
 }

-/**
- * drm_sched_job_done_cb - the callback for a done job
- * @f: fence
- * @cb: fence callbacks
- */
-static void drm_sched_job_done_cb(struct dma_fence *f, struct dma_fence_cb *cb)
-{
-	struct drm_sched_job *s_job = container_of(cb, struct drm_sched_job, cb);
-
-	drm_sched_job_done(s_job);
-}
-
 /**
  * drm_sched_dependency_optimized - test if the dependency can be optimized
  *
@@ -504,11 +491,7 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
	 */
 	list_for_each_entry_safe_reverse(s_job, tmp, &sched->pending_list,
					 list) {
-		if (s_job->s_fence->parent &&
-		    dma_fence_remove_callback(s_job->s_fence->parent,
-					      &s_job->cb)) {
-			dma_fence_put(s_job->s_fence->parent);
-			s_job->s_fence->parent = NULL;
+		if (drm_sched_fence_clear_parent(s_job->s_fence)) {
 			atomic_dec(&sched->hw_rq_count);
 		} else {
 			/*
@@ -560,7 +543,6 @@ EXPORT_SYMBOL(drm_sched_stop);
 void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
 {
 	struct drm_sched_job *s_job, *tmp;
-	int r;

	/*
	 * Locking the list is not required here as the sched thread is parked
@@ -575,16 +557,10 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
 		if (!full_recovery)
 			continue;

-		if (fence) {
-			r = dma_fence_add_callback(fence, &s_job->cb,
-						   drm_sched_job_done_cb);
-			if (r == -ENOENT)
-				drm_sched_job_done(s_job);
-			else if (r)
-				DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n",
-					      r);
-		} else
-			drm_sched_job_done(s_job);
+		if (fence)
+			drm_sched_fence_set_parent(s_job->s_fence, fence);
+		else
+			drm_sched_job_done(s_job->s_fence);
 	}

 	if (full_recovery) {
@@ -1008,7 +984,6 @@ static bool drm_sched_blocked(struct drm_gpu_scheduler *sched)
 static int drm_sched_main(void *param)
 {
 	struct drm_gpu_scheduler *sched = (struct drm_gpu_scheduler *)param;
-	int r;

 	sched_set_fifo_low(current);

@@ -1049,22 +1024,12 @@ static int drm_sched_main(void *param)
 		drm_sched_fence_scheduled(s_fence);

 		if (!IS_ERR_OR_NULL(fence)) {
-			s_fence->parent = dma_fence_get(fence);
-			/* Drop for original kref_init of the fence */
-			dma_fence_put(fence);
-
-			r = dma_fence_add_callback(fence, &sched_job->cb,
-						   drm_sched_job_done_cb);
-			if (r == -ENOENT)
-				drm_sched_job_done(sched_job);
-			else if (r)
-				DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n",
-					      r);
+			drm_sched_fence_set_parent(s_fence, fence);
 		} else {
 			if (IS_ERR(fence))
 				dma_fence_set_error(&s_fence->finished, PTR_ERR(fence));

-			drm_sched_job_done(sched_job);
+			drm_sched_job_done(s_fence);
 		}

 		wake_up(&sched->job_scheduled);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 1f7d9dd1a444..5066729c15ce 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -281,6 +281,10 @@ struct drm_sched_fence {
          * @owner: job owner for debugging
          */
	void *owner;
+	/**
+	 * @cb: callback
+	 */
+	struct dma_fence_cb cb;
 };

 struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
@@ -300,7 +304,6 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
  *               be scheduled further.
  * @s_priority: the priority of the job.
  * @entity: the entity to which this job belongs.
- * @cb: the callback for the parent fence in s_fence.
  *
  * A job is created by the driver using drm_sched_job_init(), and
  * should call drm_sched_entity_push_job() once it wants the scheduler
@@ -325,7 +328,6 @@ struct drm_sched_job {
 	atomic_t karma;
 	enum drm_sched_priority s_priority;
 	struct drm_sched_entity  *entity;
-	struct dma_fence_cb cb;
	/**
	 * @dependencies:
	 *
@@ -559,6 +561,10 @@ void drm_sched_fence_free(struct drm_sched_fence *fence);
 void drm_sched_fence_scheduled(struct drm_sched_fence *fence);
 void drm_sched_fence_finished(struct drm_sched_fence *fence);

+bool drm_sched_fence_clear_parent(struct drm_sched_fence *s_fence);
+void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence,
+				struct dma_fence *fence);
+
 unsigned long drm_sched_suspend_timeout(struct drm_gpu_scheduler *sched);
 void drm_sched_resume_timeout(struct drm_gpu_scheduler *sched,
			      unsigned long remaining);
--
2.25.1
2 years, 8 months
[PATCH v3] drm/sched: Fix kernel NULL pointer dereference error
by Arvind Yadav
This is purely a timing issue: sometimes the job is freed before the
job is done. To fix this issue, move the 'dma_fence_cb' callback from
the job (struct drm_sched_job) to the scheduler fence
(struct drm_sched_fence).

- Added the drm_sched_fence_set_parent() function (and other
  *_parent_cb helpers) in sched_fence.c, and moved the parent fence
  initialization and callback installation into it (just a cleanup).

BUG: kernel NULL pointer dereference, address: 0000000000000088
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 0 P4D 0
Oops: 0000 [#1] PREEMPT SMP NOPTI
CPU: 2 PID: 0 Comm: swapper/2 Not tainted 6.0.0-rc2-custom #1
Arvind : [dma_fence_default_wait _START] timeout = -1
Hardware name: AMD Dibbler/Dibbler, BIOS RDB1107CC 09/26/2018
RIP: 0010:drm_sched_job_done.isra.0+0x11/0x140 [gpu_sched]
Code: 8b fe ff ff be 03 00 00 00 e8 7b da b7 e3 e9 d4 fe ff ff 66 0f 1f 44 00 00 0f 1f 44 00 00 55 48 89 e5 41 55 41 54 49 89 fc 53 <48> 8b 9f 88 00 00 00 f0 ff 8b f0 00 00 00 48 8b 83 80 01 00 00 f0
RSP: 0018:ffffb1b1801d4d38 EFLAGS: 00010087
RAX: ffffffffc0aa48b0 RBX: ffffb1b1801d4d70 RCX: 0000000000000018
RDX: 000036c70afb7c1d RSI: ffff8a45ca413c60 RDI: 0000000000000000
RBP: ffffb1b1801d4d50 R08: 00000000000000b5 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
R13: ffffb1b1801d4d70 R14: ffff8a45c4160000 R15: ffff8a45c416a708
FS:  0000000000000000(0000) GS:ffff8a48a0a80000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000088 CR3: 000000014ad50000 CR4: 00000000003506e0
Call Trace:
 <IRQ>
 drm_sched_job_done_cb+0x12/0x20 [gpu_sched]
 dma_fence_signal_timestamp_locked+0x7e/0x110
 dma_fence_signal+0x31/0x60
 amdgpu_fence_process+0xc4/0x140 [amdgpu]
 gfx_v9_0_eop_irq+0x9d/0xd0 [amdgpu]
 amdgpu_irq_dispatch+0xb7/0x210 [amdgpu]
 amdgpu_ih_process+0x86/0x100 [amdgpu]
 amdgpu_irq_handler+0x24/0x60 [amdgpu]
 __handle_irq_event_percpu+0x4b/0x190
 handle_irq_event_percpu+0x15/0x50
 handle_irq_event+0x39/0x60
 handle_edge_irq+0xaf/0x210
 __common_interrupt+0x6e/0x110
 common_interrupt+0xc1/0xe0
 </IRQ>
 <TASK>

Signed-off-by: Arvind Yadav <Arvind.Yadav(a)amd.com>
---
Changes in v2: Moved the 'dma_fence_cb' callback from the job
(struct drm_sched_job) to the scheduler fence (struct drm_sched_fence)
instead of adding a NULL check for s_fence.

Changes in v3: Added the drm_sched_fence_set_parent() function (and
other *_parent_cb helpers) in sched_fence.c, and moved the parent
fence initialization and callback installation into it (just a
cleanup).
---
 drivers/gpu/drm/scheduler/sched_fence.c | 53 +++++++++++++++++++++++++
 drivers/gpu/drm/scheduler/sched_main.c  | 38 +++++-------------
 include/drm/gpu_scheduler.h             | 12 +++++-
 3 files changed, 72 insertions(+), 31 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
index 7fd869520ef2..f6808f363261 100644
--- a/drivers/gpu/drm/scheduler/sched_fence.c
+++ b/drivers/gpu/drm/scheduler/sched_fence.c
@@ -77,6 +77,59 @@ static void drm_sched_fence_free_rcu(struct rcu_head *rcu)
 	if (!WARN_ON_ONCE(!fence))
 		kmem_cache_free(sched_fence_slab, fence);
 }
+/**
+ * drm_sched_job_done_cb - the callback for a done job
+ * @f: fence
+ * @cb: fence callbacks
+ */
+static void drm_sched_job_done_cb(struct dma_fence *f, struct dma_fence_cb *cb)
+{
+	struct drm_sched_fence *s_fence = container_of(cb, struct drm_sched_fence,
+						       cb);
+	struct drm_gpu_scheduler *sched = s_fence->sched;
+
+	atomic_dec(&sched->hw_rq_count);
+	atomic_dec(sched->score);
+
+	dma_fence_get(&s_fence->finished);
+	drm_sched_fence_finished(s_fence);
+	dma_fence_put(&s_fence->finished);
+	wake_up_interruptible(&sched->wake_up_worker);
+}
+
+int drm_sched_fence_add_parent_cb(struct dma_fence *fence,
+				  struct drm_sched_fence *s_fence)
+{
+	return dma_fence_add_callback(fence, &s_fence->cb,
+				      drm_sched_job_done_cb);
+}
+
+bool drm_sched_fence_remove_parent_cb(struct drm_sched_fence *s_fence)
+{
+	return dma_fence_remove_callback(s_fence->parent,
+					 &s_fence->cb);
+}
+
+/**
+ * drm_sched_fence_set_parent - set the parent fence and add the callback
+ * @fence: pointer to the hw fence
+ * @s_fence: pointer to the fence
+ *
+ * Set the parent fence and install the callback for a done job.
+ */
+int drm_sched_fence_set_parent(struct dma_fence *fence,
+			       struct drm_sched_fence *s_fence)
+{
+	if (s_fence->parent &&
+	    dma_fence_remove_callback(s_fence->parent, &s_fence->cb))
+		dma_fence_put(s_fence->parent);
+
+	s_fence->parent = dma_fence_get(fence);
+	/* Drop for original kref_init of the fence */
+	dma_fence_put(fence);
+	return dma_fence_add_callback(fence, &s_fence->cb,
+				      drm_sched_job_done_cb);
+}
 
 /**
  * drm_sched_fence_free - free up an uninitialized fence
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 4cc59bae38dd..cfb52e15f5b0 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -253,13 +253,12 @@ drm_sched_rq_select_entity_fifo(struct drm_sched_rq *rq)
 /**
  * drm_sched_job_done - complete a job
- * @s_job: pointer to the job which is done
+ * @s_fence: pointer to the fence of a done job
  *
  * Finish the job's fence and wake up the worker thread.
  */
-static void drm_sched_job_done(struct drm_sched_job *s_job)
+static void drm_sched_job_done(struct drm_sched_fence *s_fence)
 {
-	struct drm_sched_fence *s_fence = s_job->s_fence;
 	struct drm_gpu_scheduler *sched = s_fence->sched;
 
 	atomic_dec(&sched->hw_rq_count);
@@ -273,18 +272,6 @@ static void drm_sched_job_done(struct drm_sched_job *s_job)
 	wake_up_interruptible(&sched->wake_up_worker);
 }
 
-/**
- * drm_sched_job_done_cb - the callback for a done job
- * @f: fence
- * @cb: fence callbacks
- */
-static void drm_sched_job_done_cb(struct dma_fence *f, struct dma_fence_cb *cb)
-{
-	struct drm_sched_job *s_job = container_of(cb, struct drm_sched_job, cb);
-
-	drm_sched_job_done(s_job);
-}
-
 /**
  * drm_sched_dependency_optimized - test if the dependency can be optimized
  *
@@ -505,8 +492,7 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 		if (s_job->s_fence->parent &&
-		    dma_fence_remove_callback(s_job->s_fence->parent,
-					      &s_job->cb)) {
+		    drm_sched_fence_remove_parent_cb(s_job->s_fence)) {
 			dma_fence_put(s_job->s_fence->parent);
 			s_job->s_fence->parent = NULL;
 			atomic_dec(&sched->hw_rq_count);
@@ -576,15 +562,14 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
 			continue;
 
 		if (fence) {
-			r = dma_fence_add_callback(fence, &s_job->cb,
-						   drm_sched_job_done_cb);
+			r = drm_sched_fence_add_parent_cb(fence, s_job->s_fence);
 			if (r == -ENOENT)
-				drm_sched_job_done(s_job);
+				drm_sched_job_done(s_job->s_fence);
 			else if (r)
 				DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n",
 					      r);
 		} else
-			drm_sched_job_done(s_job);
+			drm_sched_job_done(s_job->s_fence);
 	}
 
 	if (full_recovery) {
@@ -1049,14 +1034,9 @@ static int drm_sched_main(void *param)
 		drm_sched_fence_scheduled(s_fence);
 
 		if (!IS_ERR_OR_NULL(fence)) {
-			s_fence->parent = dma_fence_get(fence);
-			/* Drop for original kref_init of the fence */
-			dma_fence_put(fence);
-
-			r = dma_fence_add_callback(fence, &sched_job->cb,
-						   drm_sched_job_done_cb);
+			r = drm_sched_fence_set_parent(fence, s_fence);
 			if (r == -ENOENT)
-				drm_sched_job_done(sched_job);
+				drm_sched_job_done(s_fence);
 			else if (r)
 				DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n",
 					      r);
@@ -1064,7 +1044,7 @@ static int drm_sched_main(void *param)
 			if (IS_ERR(fence))
 				dma_fence_set_error(&s_fence->finished,
 						    PTR_ERR(fence));
-			drm_sched_job_done(sched_job);
+			drm_sched_job_done(s_fence);
 		}
 
 		wake_up(&sched->job_scheduled);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 1f7d9dd1a444..7258e2fa195f 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -281,6 +281,10 @@ struct drm_sched_fence {
 	 * @owner: job owner for debugging
 	 */
 	void *owner;
+	/**
+	 * @cb: callback
+	 */
+	struct dma_fence_cb cb;
 };
 
 struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
@@ -300,7 +304,6 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
  *               be scheduled further.
  * @s_priority: the priority of the job.
  * @entity: the entity to which this job belongs.
- * @cb: the callback for the parent fence in s_fence.
  *
  * A job is created by the driver using drm_sched_job_init(), and
  * should call drm_sched_entity_push_job() once it wants the scheduler
@@ -325,7 +328,6 @@ struct drm_sched_job {
 	atomic_t karma;
 	enum drm_sched_priority s_priority;
 	struct drm_sched_entity *entity;
-	struct dma_fence_cb cb;
 	/**
 	 * @dependencies:
 	 *
@@ -559,6 +561,12 @@ void drm_sched_fence_free(struct drm_sched_fence *fence);
 void drm_sched_fence_scheduled(struct drm_sched_fence *fence);
 void drm_sched_fence_finished(struct drm_sched_fence *fence);
 
+int drm_sched_fence_add_parent_cb(struct dma_fence *fence,
+				  struct drm_sched_fence *s_fence);
+bool drm_sched_fence_remove_parent_cb(struct drm_sched_fence *s_fence);
+int drm_sched_fence_set_parent(struct dma_fence *fence,
+			       struct drm_sched_fence *s_fence);
+
 unsigned long drm_sched_suspend_timeout(struct drm_gpu_scheduler *sched);
 void drm_sched_resume_timeout(struct drm_gpu_scheduler *sched,
 			      unsigned long remaining);
-- 
2.25.1
2 years, 8 months