On 7/15/25 11:33 AM, Johan Hovold wrote:
On Tue, Jul 15, 2025 at 02:41:23PM +0530, Manivannan Sadhasivam wrote:
On Tue, Jul 15, 2025 at 09:48:30AM GMT, Johan Hovold wrote:
On Mon, Jul 14, 2025 at 11:31:04PM +0530, Manivannan Sadhasivam wrote:
Obviously, it is the pwrctrl change that caused the regression, but it ultimately uncovered a flaw in the ASPM enablement logic of the controller driver. So to address the actual issue, switch to a bus notifier for enabling ASPM on the PCI devices. The notifier will notify the controller driver when a PCI device is attached to the bus, thereby allowing it to enable ASPM more reliably. It should be noted that 'pci_dev::link_state', which is required by the pci_enable_link_state_locked() API for enabling ASPM, is only set by the time of the BUS_NOTIFY_BIND_DRIVER stage of the notification, so we cannot enable ASPM during the BUS_NOTIFY_ADD_DEVICE stage.
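[For illustration, here is a minimal sketch of what such a bus notifier could look like. The function and variable names are hypothetical, not taken from the actual patch; the check that the device actually sits on this controller's root bus is omitted for brevity; and since a notifier callback does not hold the PCI bus semaphore, the unlocked API variants are used here.]

#include <linux/notifier.h>
#include <linux/pci.h>

static int qcom_pcie_bus_notifier(struct notifier_block *nb,
				  unsigned long action, void *data)
{
	struct pci_dev *pdev = to_pci_dev(data);

	/*
	 * pci_dev::link_state is only populated by the time
	 * BUS_NOTIFY_BIND_DRIVER is delivered, so ASPM cannot be
	 * enabled at BUS_NOTIFY_ADD_DEVICE.
	 */
	if (action != BUS_NOTIFY_BIND_DRIVER)
		return NOTIFY_DONE;

	/* Devices must be in D0 before enabling PCI-PM L1 substates */
	pci_set_power_state(pdev, PCI_D0);
	pci_enable_link_state(pdev, PCIE_LINK_STATE_ALL);

	return NOTIFY_OK;
}

static struct notifier_block qcom_pcie_nb = {
	.notifier_call = qcom_pcie_bus_notifier,
};

[The notifier block would then be registered once during host bridge probe with bus_register_notifier(&pci_bus_type, &qcom_pcie_nb).]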
A problem with this approach is that ASPM will never be enabled (and power consumption will be higher) in case an endpoint driver is missing.
I'm aware of this limitation, but I don't think we should really worry about that scenario. No one is going to intentionally run an OS with a PCI device present and the relevant driver missing. If that happens, it is either due to some issue with driver loading or because the user is doing it deliberately. Such scenarios are short-lived IMO.
There may not even be a driver (yet). A user could plug any device into a free slot. I can also imagine someone wanting to blacklist a driver temporarily for whatever reason.
How would this work on x86? Would the BIOS typically enable ASPM for each EP? If so, then that's what we should do here too, even if the EP driver happens to be disabled.
Not sure about all x86, but the Intel VMD controller driver surely doesn't care what's on the other end:
drivers/pci/controller/vmd.c : vmd_pm_enable_quirk()
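[For reference, that quirk follows an unconditional bus-walk pattern: every device found during enumeration gets ASPM enabled, regardless of whether a driver ever binds. A rough sketch of the pattern is below; the callback name is hypothetical, and the _locked API variants are used because pci_walk_bus_locked() holds the PCI bus semaphore across the walk.]

#include <linux/pci.h>

static int enable_aspm_cb(struct pci_dev *pdev, void *userdata)
{
	/*
	 * Put the device into D0 first; PCI-PM L1 substates may only
	 * be enabled while the device is in D0.
	 */
	pci_set_power_state_locked(pdev, PCI_D0);
	pci_enable_link_state_locked(pdev, PCIE_LINK_STATE_ALL);

	return 0;
}

/* Invoked once after enumeration, e.g.: */
/* pci_walk_bus_locked(bridge->bus, enable_aspm_cb, NULL); */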
Konrad