Hi folks,
This series fixes several stability issues with the upstream ufs-exynos
driver, specifically for the gs101 SoC found in Pixel 6.
The main fix concerns the I/O cache coherency (iocc) setting: it is now
applied only when the dma-coherent property is specified in the device
tree. This fixes the UFS stability issues on gs101, and it should also
fix issues on the exynosauto platform, which appears to have similar
iocc shareability bits.
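For readers unfamiliar with the mechanism, here is a minimal sketch of the
idea (the struct fields, register offset and mask names are illustrative
placeholders, not the actual ufs-exynos symbols): the shareability bits are
only programmed when the controller's DT node carries dma-coherent.

#include <linux/device.h>
#include <linux/of.h>
#include <linux/regmap.h>

struct ufs_iocc_sketch {
	struct device *dev;
	struct regmap *sysreg;		/* system controller regmap */
	u32 shareability_offset;	/* shareability register offset */
	u32 iocc_mask;			/* SoC-specific shareability bits */
};

static void ufs_iocc_configure(struct ufs_iocc_sketch *ufs)
{
	/* No dma-coherent property: leave iocc disabled */
	if (!of_dma_is_coherent(dev_of_node(ufs->dev)))
		return;

	regmap_update_bits(ufs->sysreg, ufs->shareability_offset,
			   ufs->iocc_mask, ufs->iocc_mask);
}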
Additionally, the phy reference counting is fixed, which allows module
load/unload to work reliably and keeps the phy state machine in sync
with the controller glue driver.
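As a rough illustration of the balancing involved (using the generic PHY
API; the helper names below are placeholders, not the driver's actual
callbacks), every phy_init()/phy_power_on() taken on the init path must be
paired with phy_power_off()/phy_exit() on teardown so the phy refcounts
return to zero on unload:

#include <linux/phy/phy.h>

static int glue_phy_start(struct phy *ufs_phy)
{
	int ret;

	ret = phy_init(ufs_phy);	/* takes an init refcount */
	if (ret)
		return ret;

	ret = phy_power_on(ufs_phy);	/* takes a power refcount */
	if (ret)
		phy_exit(ufs_phy);	/* unwind on failure */

	return ret;
}

static void glue_phy_stop(struct phy *ufs_phy)
{
	phy_power_off(ufs_phy);		/* drops the power refcount */
	phy_exit(ufs_phy);		/* drops the init refcount */
}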
regards,
Peter
Changes since v1:
* Added patch for correct handling of iocc dependent on the dma-coherent property
* Rebased onto next-20250319
* Add a gs101 specific suspend hook (Bart)
* Drop asserting GPIO_OUT in .exit() (Peter)
* Remove superfluous blank line (Bart)
* Update PRDT_PREFECT_EN to PRDT_PREFETCH_EN (Bart)
* Update commit description for desctype type 3 (Eric)
* https://lore.kernel.org/lkml/20250226220414.343659-1-peter.griffin@linaro.o…
Signed-off-by: Peter Griffin <peter.griffin(a)linaro.org>
---
Peter Griffin (7):
scsi: ufs: exynos: ensure pre_link() executes before exynos_ufs_phy_init()
scsi: ufs: exynos: move ufs shareability value to drvdata
scsi: ufs: exynos: disable iocc if dma-coherent property isn't set
scsi: ufs: exynos: ensure consistent phy reference counts
scsi: ufs: exynos: Enable PRDT pre-fetching with UFSHCD_CAP_CRYPTO
scsi: ufs: exynos: Move phy calls to .exit() callback
scsi: ufs: exynos: gs101: put ufs device in reset on .suspend()
drivers/ufs/host/ufs-exynos.c | 85 ++++++++++++++++++++++++++++++++-----------
drivers/ufs/host/ufs-exynos.h | 6 ++-
2 files changed, 68 insertions(+), 23 deletions(-)
---
base-commit: 433ccb6f2e879866b8601fcb1de14e316cdb0d39
change-id: 20250319-exynos-ufs-stability-fixes-e8da9862e3dc
Best regards,
--
Peter Griffin <peter.griffin(a)linaro.org>
Hello Greg,
This patch applies cleanly to the v5.4 kernel.
dmaengine: ti: edma: Add some null pointer checks to the edma_probe
[upstream commit 6e2276203ac9ff10fc76917ec9813c660f627369]
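The change follows the usual probe-time pattern of checking allocation
results before use. A sketch of that pattern is shown below with an
illustrative helper; it is not the verbatim upstream hunk.

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/slab.h>

static int edma_setup_queue_map_sketch(struct device *dev, int queue_count)
{
	s8 (*queue_priority_map)[2];

	queue_priority_map = devm_kcalloc(dev, queue_count + 1,
					  sizeof(*queue_priority_map),
					  GFP_KERNEL);
	if (!queue_priority_map)
		return -ENOMEM;

	/* ... fill in the map and hand it to the driver ... */
	return 0;
}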
Nouveau currently relies on the assumption that dma_fences will only
ever get signalled through nouveau_fence_signal(), which takes care of
removing a signalled fence from the list nouveau_fence_chan.pending.
This self-imposed rule is violated in nouveau_fence_done(), where
dma_fence_is_signaled() can signal the fence without removing it from
the list. This enables accesses to already signalled fences through the
list, which is a bug.
Furthermore, it must always be possible to use standard dma_fence
methods on a dma_fence and observe valid behavior. The canonical way of
ensuring that signalling a fence has additional effects is to add those
effects to a callback and register it on that fence.
Move the code from nouveau_fence_signal() into a dma_fence callback.
Register that callback when creating the fence.
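As a generic illustration of that pattern (not the nouveau code itself,
which follows in the diff below), the side effects are packaged into a
dma_fence_cb and registered with dma_fence_add_callback(), which returns
-ENOENT if the fence is already signalled:

#include <linux/dma-fence.h>

static void cleanup_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
{
	/* List removal, refcount drops, event throttling, ... run here
	 * regardless of which path ends up signalling the fence.
	 */
}

static int arm_cleanup(struct dma_fence *fence, struct dma_fence_cb *cb)
{
	int ret;

	ret = dma_fence_add_callback(fence, cb, cleanup_cb);
	if (ret == -ENOENT) {
		/* Already signalled: run the cleanup directly. */
		cleanup_cb(fence, cb);
		ret = 0;
	}

	return ret;
}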
Cc: <stable(a)vger.kernel.org> # 4.10+
Signed-off-by: Philipp Stanner <phasta(a)kernel.org>
---
Changes in v2:
- Remove Fixes: tag. (Danilo)
- Remove integer "drop" and call nvif_event_block() in the fence
callback. (Danilo)
---
drivers/gpu/drm/nouveau/nouveau_fence.c | 52 +++++++++++++------------
drivers/gpu/drm/nouveau/nouveau_fence.h | 1 +
2 files changed, 29 insertions(+), 24 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c b/drivers/gpu/drm/nouveau/nouveau_fence.c
index 7cc84472cece..cf510ef9641a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_fence.c
+++ b/drivers/gpu/drm/nouveau/nouveau_fence.c
@@ -50,24 +50,24 @@ nouveau_fctx(struct nouveau_fence *fence)
return container_of(fence->base.lock, struct nouveau_fence_chan, lock);
}
-static int
-nouveau_fence_signal(struct nouveau_fence *fence)
+static void
+nouveau_fence_cleanup_cb(struct dma_fence *dfence, struct dma_fence_cb *cb)
{
- int drop = 0;
+ struct nouveau_fence_chan *fctx;
+ struct nouveau_fence *fence;
+
+ fence = container_of(dfence, struct nouveau_fence, base);
+ fctx = nouveau_fctx(fence);
- dma_fence_signal_locked(&fence->base);
list_del(&fence->head);
rcu_assign_pointer(fence->channel, NULL);
if (test_bit(DMA_FENCE_FLAG_USER_BITS, &fence->base.flags)) {
- struct nouveau_fence_chan *fctx = nouveau_fctx(fence);
-
if (!--fctx->notify_ref)
- drop = 1;
+ nvif_event_block(&fctx->event);
}
dma_fence_put(&fence->base);
- return drop;
}
static struct nouveau_fence *
@@ -93,8 +93,7 @@ nouveau_fence_context_kill(struct nouveau_fence_chan *fctx, int error)
if (error)
dma_fence_set_error(&fence->base, error);
- if (nouveau_fence_signal(fence))
- nvif_event_block(&fctx->event);
+ dma_fence_signal_locked(&fence->base);
}
fctx->killed = 1;
spin_unlock_irqrestore(&fctx->lock, flags);
@@ -127,11 +126,10 @@ nouveau_fence_context_free(struct nouveau_fence_chan *fctx)
kref_put(&fctx->fence_ref, nouveau_fence_context_put);
}
-static int
+static void
nouveau_fence_update(struct nouveau_channel *chan, struct nouveau_fence_chan *fctx)
{
struct nouveau_fence *fence;
- int drop = 0;
u32 seq = fctx->read(chan);
while (!list_empty(&fctx->pending)) {
@@ -140,10 +138,8 @@ nouveau_fence_update(struct nouveau_channel *chan, struct nouveau_fence_chan *fc
if ((int)(seq - fence->base.seqno) < 0)
break;
- drop |= nouveau_fence_signal(fence);
+ dma_fence_signal_locked(&fence->base);
}
-
- return drop;
}
static void
@@ -152,7 +148,6 @@ nouveau_fence_uevent_work(struct work_struct *work)
struct nouveau_fence_chan *fctx = container_of(work, struct nouveau_fence_chan,
uevent_work);
unsigned long flags;
- int drop = 0;
spin_lock_irqsave(&fctx->lock, flags);
if (!list_empty(&fctx->pending)) {
@@ -161,11 +156,8 @@ nouveau_fence_uevent_work(struct work_struct *work)
fence = list_entry(fctx->pending.next, typeof(*fence), head);
chan = rcu_dereference_protected(fence->channel, lockdep_is_held(&fctx->lock));
- if (nouveau_fence_update(chan, fctx))
- drop = 1;
+ nouveau_fence_update(chan, fctx);
}
- if (drop)
- nvif_event_block(&fctx->event);
spin_unlock_irqrestore(&fctx->lock, flags);
}
@@ -235,6 +227,19 @@ nouveau_fence_emit(struct nouveau_fence *fence)
&fctx->lock, fctx->context, ++fctx->sequence);
kref_get(&fctx->fence_ref);
+ fence->cb.func = nouveau_fence_cleanup_cb;
+ /* Adding a callback runs into __dma_fence_enable_signaling(), which will
+ * ultimately run into nouveau_fence_no_signaling(), where a WARN_ON
+ * would fire because the refcount can be dropped there.
+ *
+ * Increment the refcount here temporarily to work around that.
+ */
+ dma_fence_get(&fence->base);
+ ret = dma_fence_add_callback(&fence->base, &fence->cb, nouveau_fence_cleanup_cb);
+ dma_fence_put(&fence->base);
+ if (ret)
+ return ret;
+
ret = fctx->emit(fence);
if (!ret) {
dma_fence_get(&fence->base);
@@ -246,8 +251,7 @@ nouveau_fence_emit(struct nouveau_fence *fence)
return -ENODEV;
}
- if (nouveau_fence_update(chan, fctx))
- nvif_event_block(&fctx->event);
+ nouveau_fence_update(chan, fctx);
list_add_tail(&fence->head, &fctx->pending);
spin_unlock_irq(&fctx->lock);
@@ -270,8 +274,8 @@ nouveau_fence_done(struct nouveau_fence *fence)
spin_lock_irqsave(&fctx->lock, flags);
chan = rcu_dereference_protected(fence->channel, lockdep_is_held(&fctx->lock));
- if (chan && nouveau_fence_update(chan, fctx))
- nvif_event_block(&fctx->event);
+ if (chan)
+ nouveau_fence_update(chan, fctx);
spin_unlock_irqrestore(&fctx->lock, flags);
}
return dma_fence_is_signaled(&fence->base);
diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.h b/drivers/gpu/drm/nouveau/nouveau_fence.h
index 8bc065acfe35..e6b2df7fdc42 100644
--- a/drivers/gpu/drm/nouveau/nouveau_fence.h
+++ b/drivers/gpu/drm/nouveau/nouveau_fence.h
@@ -10,6 +10,7 @@ struct nouveau_bo;
struct nouveau_fence {
struct dma_fence base;
+ struct dma_fence_cb cb;
struct list_head head;
--
2.48.1
Nikolay reports [1] that accessing BIOS data (first 1MB of the physical
address space) via /dev/mem results in an SEPT violation.
The cause is ioremap() (via xlate_dev_mem_ptr()) establishing an
unencrypted mapping where the kernel had established an encrypted
mapping previously.
Teach __ioremap_check_other() that this range (the first 1 MiB, i.e. PFNs
below 256 with 4 KiB pages) shall always be mapped encrypted, as
historically it holds memory-resident data, not MMIO with side effects.
Cc: <x86(a)kernel.org>
Cc: Vishal Annapurve <vannapurve(a)google.com>
Cc: Kirill Shutemov <kirill.shutemov(a)linux.intel.com>
Reported-by: Nikolay Borisov <nik.borisov(a)suse.com>
Closes: http://lore.kernel.org/20250318113604.297726-1-nik.borisov@suse.com [1]
Tested-by: Nikolay Borisov <nik.borisov(a)suse.com>
Fixes: 9aa6ea69852c ("x86/tdx: Make pages shared in ioremap()")
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
---
arch/x86/mm/ioremap.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 42c90b420773..9e81286a631e 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -122,6 +122,10 @@ static void __ioremap_check_other(resource_size_t addr, struct ioremap_desc *des
return;
}
+ /* Ensure BIOS data (see devmem_is_allowed()) is consistently mapped */
+ if (PHYS_PFN(addr) < 256)
+ desc->flags |= IORES_MAP_ENCRYPTED;
+
if (!IS_ENABLED(CONFIG_EFI))
return;