-----Original Message-----
From: Andrew Lunn <andrew@lunn.ch>
Sent: Wednesday, September 8, 2021 9:35 PM
To: Machnikowski, Maciej <maciej.machnikowski@intel.com>
Subject: Re: [PATCH net-next 1/2] rtnetlink: Add new RTM_GETEECSTATE message to get SyncE status
The SyncE API considerations start at ~54:00, but basically we need an API for:
- controlling the lane-to-pin mapping for clock recovery,
- checking the EEC/DPLL state and seeing what the source of the reference frequency is (in more advanced deployments) - see the sketch after this list,
- controlling additional input and output pins (GNSS input, external inputs, recovered frequency reference).
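For the state-reporting item, a minimal sketch of the kind of EEC state enum such an API would expose; the names are hypothetical, not necessarily what the patch defines, and the states follow ITU-T G.8262:

enum if_eec_state {
	IF_EEC_STATE_INVALID = 0,	/* state not available */
	IF_EEC_STATE_FREERUN,		/* running on the local oscillator */
	IF_EEC_STATE_LOCKED,		/* locked to the selected reference */
	IF_EEC_STATE_HOLDOVER,		/* reference lost, holding last known frequency */
};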
Now that you have pointed to a datasheet...
- Controlling the lane to pin mapping for clock recovery
So this is a PHY property. That could be Linux driving the PHY via phylib (drivers/net/phy), or there could be firmware in the MAC driver which hides the PHY and gives you some sort of API to access it.
Yes, that's deployment-dependent - in our case we use a MAC driver that proxies that.
Check the EEC/DPLL state and see what the source of the reference frequency is
Where is the EEC/DPLL implemented? Is it typically also in the PHY? Or some other hardware block?
In most cases it will be an external device, but there are implementations that have a PLL/DPLL inside the PHY (like the Broadcom example), and I don't know how switches implement that.
I just want to make sure we have an API which we can easily delegate to different subsystems, some of it in the PHY driver, maybe some of it somewhere else.
The reasoning behind putting it in the ndo subsystem is that ultimately the netdev receives the ESMC message and is the entity that should know how it is connected to the PHY. Moving the mapping between netdevs and PHY lanes/ports into software running in userspace would be very complex. I believe the right way is to let the netdev driver manage that complexity and forward the request to the PHY subsystem if needed. The logic is:
- receive an ESMC message on a netdev,
- send the request to enable the recovered clock to the netdev that received it,
- ask that netdev for the status of the EEC that drives it.
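As an illustration only, that flow could map onto a pair of ndo callbacks along these lines; the names are made up for this discussion and do not exist in the kernel today:

/* Route the clock recovered from this port's RX path to a given
 * recovered-clock output pin, or stop doing so. */
int (*ndo_set_rclk_out)(struct net_device *dev, u32 out_idx, bool enable);

/* Report the state of the EEC/DPLL that this netdev is synchronized by. */
int (*ndo_get_eec_state)(struct net_device *dev, enum if_eec_state *state);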
Also, looking at the Marvell datasheet, it appears these registers are in the MDIO_MMD_VEND2 range. Has any of this been specified? Can we expect to be able to write a generic implementation sometime in the future which PHY drivers can share?
I believe it will be vendor-specific, as there are different implementations of it. It can be an internal PLL that gets all the inputs and syncs to one of them, or it can be a MUX with divider(s).
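For the PHY-internal case, a PHY driver would most likely end up poking a vendor register over clause-45 MDIO, roughly as in the sketch below. MDIO_MMD_VEND2 and phy_read_mmd()/phy_write_mmd() are real; the register address and bit are invented, since every vendor defines its own layout:

#include <linux/phy.h>
#include <linux/mdio.h>

#define MY_PHY_RCLK_CTRL	0x8000	/* hypothetical vendor register */
#define MY_PHY_RCLK_EN		BIT(0)	/* hypothetical enable bit */

static int my_phy_enable_rclk(struct phy_device *phydev, bool enable)
{
	int val;

	val = phy_read_mmd(phydev, MDIO_MMD_VEND2, MY_PHY_RCLK_CTRL);
	if (val < 0)
		return val;

	if (enable)
		val |= MY_PHY_RCLK_EN;
	else
		val &= ~MY_PHY_RCLK_EN;

	return phy_write_mmd(phydev, MDIO_MMD_VEND2, MY_PHY_RCLK_CTRL, val);
}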
I just looked at a 1G Marvell PHY. It uses RGMII or SGMII towards the host, but there is no indication you can take the clock from the SGMII SERDES; only the recovered clock from the line is available. And the recovered clock always goes out the CLK125 pin, which can be either 125 MHz or 25 MHz.
So in this case, you have no need to control the lane to pin mapping, it is fixed, but do we want to be able to control the divider?
That's a complex question. Controlling the divider would make sense when we have control over the DPLL, and it still needs to be optional, as it's not always available. I assume that in the initial implementation we can rely on the MAC driver to set up the dividers to output the expected frequency to the DPLL. On faster interfaces the recovered clock frequency also depends on the link speed.
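To make the speed dependency concrete, a MAC driver could pick the divider from the negotiated link speed, along the lines of this made-up example (the divider values are assumptions chosen so the reference fed to the DPLL stays constant):

#include <linux/ethtool.h>
#include <linux/errno.h>

/* Keep the frequency fed to the DPLL constant across link speeds by
 * adjusting the recovered-clock divider (values are illustrative only). */
static int my_mac_rclk_divider(int speed_mbps)
{
	switch (speed_mbps) {
	case SPEED_10000:
		return 2;	/* hypothetical: 10G lane clock divided by 2 */
	case SPEED_25000:
		return 5;	/* hypothetical: 25G lane clock divided by 5 */
	default:
		return -EOPNOTSUPP;
	}
}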
Do we need a mechanism to actually enumerate what the hardware can do?
For recovered clocks I think we need to start with the number of PHY outputs that go towards the DPLL. We can add additional attributes, like frequency, once we need them. I want to avoid creating a Swiss-army knife with too many options here :)
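So the first enumeration step could be as small as one number, e.g. something like this hypothetical reply payload (neither the struct nor a netlink attribute for it exists today):

#include <linux/types.h>

struct if_rclk_caps {
	__u32 num_rclk_out;	/* recovered-clock outputs wired towards the EEC/DPLL */
};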
Since we are talking about clocks, dividers, and multiplexers, should all this be using the common clock framework, which already supports most of this? Do we actually need something new?
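For context on what the common clock framework already provides, a rough sketch of registering the reference mux and recovered-clock divider with existing CCF primitives; the register offsets, bit fields, and clock names are invented for the example:

#include <linux/clk-provider.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/kernel.h>

static int my_register_synce_clks(struct device *dev, void __iomem *base,
				  spinlock_t *lock)
{
	static const char * const ref_parents[] = {
		"rclk-port0", "rclk-port1", "gnss-1pps", "ext-ref",
	};
	struct clk_hw *mux, *div;

	/* Reference selector in front of the EEC/DPLL (2-bit mux field). */
	mux = clk_hw_register_mux(dev, "eec-ref", ref_parents,
				  ARRAY_SIZE(ref_parents), 0,
				  base + 0x0, 0, 2, 0, lock);
	if (IS_ERR(mux))
		return PTR_ERR(mux);

	/* Divider on the recovered-clock output (4-bit divider field). */
	div = clk_hw_register_divider(dev, "rclk-out", "eec-ref", 0,
				      base + 0x4, 0, 4, 0, lock);
	if (IS_ERR(div)) {
		clk_hw_unregister_mux(mux);
		return PTR_ERR(div);
	}

	return 0;
}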
I believe that's a specific enough case that it deserves a separate one.

Regards
Maciek