On Mon, Oct 06, 2025 at 02:33:33PM -0500, Bjorn Helgaas wrote:
On Mon, Oct 06, 2025 at 11:32:38AM -0700, Brian Norris wrote:
On Mon, Oct 06, 2025 at 03:52:22PM +0200, Mika Westerberg wrote:
On Fri, Oct 03, 2025 at 03:40:09PM -0700, Brian Norris wrote:
From: Brian Norris <briannorris@google.com>
When transitioning to D3cold, __pci_set_power_state() will first transition a device to D3hot. If the device was already in D3hot, this will add excess work: (a) read/modify/write PMCSR; and (b) excess delay (pci_dev_d3_sleep()).
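For illustration, a minimal sketch (not the actual diff) of the short-circuit being described inside __pci_set_power_state()'s D3cold path. The helper names pci_set_low_power_state() and pci_platform_power_transition() follow drivers/pci/pci.c, but signatures and surrounding logic are abbreviated here:

	if (state == PCI_D3cold) {
		/*
		 * Native D3hot entry is what costs the PMCSR
		 * read/modify/write and the pci_dev_d3_sleep() delay;
		 * skip it when the device is already in D3hot.
		 */
		if (dev->current_state != PCI_D3hot)
			error = pci_set_low_power_state(dev, PCI_D3hot);

		/* Let the platform (e.g. ACPI power resources) cut power. */
		if (pci_platform_power_transition(dev, PCI_D3cold))
			return error;
	}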
How come the device is already in D3hot when __pci_set_power_state() is called? IIRC the PCI core transitions the device to a low-power state by passing the deepest possible state there, and at that point the device is still in D0. Then __pci_set_power_state() puts it into D3hot and then turns off the power resource -> D3cold.
What am I missing here?
Some PCI drivers call pci_set_power_state(..., PCI_D3hot) on their own when preparing for runtime or system suspend, so by the time they hit pci_finish_runtime_suspend(), they're in D3hot. Then, pci_target_state() may still pick a lower state (D3cold).
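For example, a hypothetical driver's runtime-suspend callback (foo_runtime_suspend() is made up; the pattern is what matters):

	static int foo_runtime_suspend(struct device *dev)
	{
		struct pci_dev *pdev = to_pci_dev(dev);

		/* ... driver quiesces its hardware ... */

		pci_save_state(pdev);
		/*
		 * The driver enters D3hot itself, so when the PCI core's
		 * pci_finish_runtime_suspend() later targets D3cold via
		 * pci_target_state(), current_state is already PCI_D3hot.
		 */
		return pci_set_power_state(pdev, PCI_D3hot);
	}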
We might need this change, but maybe this is also an opportunity to remove some of those pci_set_power_state(..., PCI_D3hot) calls from drivers.
Agreed. PCI client drivers have no business opting for D3hot in the suspend path. It should be the other way around: they should opt out if they want by calling pci_save_state() (see the sketch below), but that is also subject to discussion.
- Mani
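For reference, the opt-out pattern mentioned above might look like this in a driver's suspend callback (bar_suspend() is hypothetical; as I understand it, the behavior relies on the core checking pdev->state_saved in its noirq phase):

	static int bar_suspend(struct device *dev)
	{
		struct pci_dev *pdev = to_pci_dev(dev);

		/*
		 * Saving state sets pdev->state_saved; pci_pm_suspend_noirq()
		 * then treats power-state handling as the driver's job and
		 * skips choosing a sleep state itself.
		 */
		pci_save_state(pdev);
		return 0;
	}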