This patch series adds support for the Samsung GH7 PCIe host controller. The Samsung GH7 PCIe controller driver depends on the arm64 arch PCI support, so Liviu Dudau's patch sets for creating a generic host_bridge from the device tree [1] and supporting PCI in AArch64 [2] are required.
This series is marked as RFC, so any comments are welcome. Thank you.
[1] http://www.spinics.net/lists/linux-pci/msg29786.html
[2] http://www.spinics.net/lists/linux-pci/msg29793.html
Jingoo Han (2):
  PCI: designware: Add ARM64 PCI support
  PCI: exynos: Add PCIe support for Samsung GH7 SoC
---
 drivers/pci/host/Kconfig           |   2 +-
 drivers/pci/host/pci-exynos.c      | 135 +++++++++++++++++++++++++++++++++---
 drivers/pci/host/pcie-designware.c |  32 +++++++++
 3 files changed, 158 insertions(+), 11 deletions(-)
Best regards, Jingoo Han
Add ARM64 PCI support for the Synopsys DesignWare PCIe controller, by using the arm64 arch PCI support and creating a generic PCI host bridge from the device tree.
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
---
 drivers/pci/host/pcie-designware.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)
diff --git a/drivers/pci/host/pcie-designware.c b/drivers/pci/host/pcie-designware.c
index 6d23d8c..fac0440 100644
--- a/drivers/pci/host/pcie-designware.c
+++ b/drivers/pci/host/pcie-designware.c
@@ -65,14 +65,27 @@
 #define PCIE_ATU_FUNC(x)		(((x) & 0x7) << 16)
 #define PCIE_ATU_UPPER_TARGET		0x91C
 
+#ifdef CONFIG_ARM
 static struct hw_pci dw_pci;
+#endif
 
 static unsigned long global_io_offset;
 
+#ifdef CONFIG_ARM
 static inline struct pcie_port *sys_to_pcie(struct pci_sys_data *sys)
 {
 	return sys->private_data;
 }
+#endif
+
+#ifdef CONFIG_ARM64
+static inline struct pcie_port *sys_to_pcie(struct pcie_port *pp)
+{
+	return pp;
+}
+
+static struct pci_ops dw_pcie_ops;
+#endif
 
 int dw_pcie_cfg_read(void __iomem *addr, int where, int size, u32 *val)
 {
@@ -381,7 +394,9 @@ static int dw_pcie_msi_map(struct irq_domain *domain, unsigned int irq,
 {
 	irq_set_chip_and_handler(irq, &dw_msi_irq_chip, handle_simple_irq);
 	irq_set_chip_data(irq, domain->host_data);
+#ifdef CONFIG_ARM
 	set_irq_flags(irq, IRQF_VALID);
+#endif
 
 	return 0;
 }
@@ -397,6 +412,10 @@ int __init dw_pcie_host_init(struct pcie_port *pp)
 	struct of_pci_range_parser parser;
 	u32 val;
 	int i;
+#ifdef CONFIG_ARM64
+	struct pci_host_bridge *bridge;
+	resource_size_t lastbus;
+#endif
 
 	if (of_pci_range_parser_init(&parser, np)) {
 		dev_err(pp->dev, "missing ranges property\n");
@@ -489,6 +508,7 @@ int __init dw_pcie_host_init(struct pcie_port *pp)
 	val |= PORT_LOGIC_SPEED_CHANGE;
 	dw_pcie_wr_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, val);
 
+#ifdef CONFIG_ARM
 	dw_pci.nr_controllers = 1;
 	dw_pci.private_data = (void **)&pp;
 
@@ -497,6 +517,16 @@ int __init dw_pcie_host_init(struct pcie_port *pp)
 #ifdef CONFIG_PCI_DOMAINS
 	dw_pci.domain++;
 #endif
+#endif
+
+#ifdef CONFIG_ARM64
+	bridge = of_create_pci_host_bridge(pp->dev, &dw_pcie_ops, pp);
+	if (IS_ERR_OR_NULL(bridge))
+		return PTR_ERR(bridge);
+
+	lastbus = pci_rescan_bus(bridge->bus);
+	pci_bus_update_busn_res_end(bridge->bus, lastbus);
+#endif
 
 	return 0;
 }
@@ -695,6 +725,7 @@ static struct pci_ops dw_pcie_ops = {
 	.write = dw_pcie_wr_conf,
 };
 
+#ifdef CONFIG_ARM
 static int dw_pcie_setup(int nr, struct pci_sys_data *sys)
 {
 	struct pcie_port *pp;
@@ -758,6 +789,7 @@ static struct hw_pci dw_pci = {
 	.map_irq = dw_pcie_map_irq,
 	.add_bus = dw_pcie_add_bus,
 };
+#endif /* CONFIG_ARM */
 
 void dw_pcie_setup_rc(struct pcie_port *pp)
 {
Jingoo,
Thanks for taking a stab at trying to convert a host bridge driver to use the new generic host bridge code.
I do however have concerns on the direction you took. You have split your driver in two, depending on whether it was CONFIG_ARM or CONFIG_ARM64, even though (with my series) there should be no reason why the host bridge driver would not work on other architectures as well once they are converted.
Also, some of the functions that you use have identical names but different signatures depending on what arch you have selected. This is really bad in my books!
What about creating functions that use my series directly if CONFIG_ARM64 is defined (or any CONFIG_ you want to create for your driver that you select from CONFIG_ARM64) and otherwise implement the CONFIG_ARM version? That way your driver will call only one API without any #ifdef and when arm code gets converted you drop your adaptation functions. Or (better yet), have a stab at converting bios32 (Rob Herring has already provided some hints on how to do it for arch/arm).
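As an illustration of that shape, here is a minimal sketch; dw_pcie_scan_bus() is a made-up name for this example, pci_common_init() is the existing arm32 path already used by dw_pcie_host_init(), and of_create_pci_host_bridge() is from Liviu's series:

/*
 * Hypothetical adaptation layer: dw_pcie_host_init() always calls
 * dw_pcie_scan_bus(), and only this helper differs per configuration.
 * Sketch only; once bios32 is converted, the CONFIG_ARM branch can
 * simply be dropped.
 */
#ifdef CONFIG_ARM64
static int dw_pcie_scan_bus(struct pcie_port *pp)
{
	struct pci_host_bridge *bridge;

	bridge = of_create_pci_host_bridge(pp->dev, &dw_pcie_ops, pp);
	if (IS_ERR(bridge))
		return PTR_ERR(bridge);
	if (!bridge)
		return -ENOMEM;

	return 0;
}
#else /* CONFIG_ARM */
static int dw_pcie_scan_bus(struct pcie_port *pp)
{
	dw_pci.nr_controllers = 1;
	dw_pci.private_data = (void **)&pp;

	pci_common_init(&dw_pci);

	return 0;
}
#endif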
To give an example of how things are not going well in your version (not obvious from your patch, but you can see it once you apply it): dw_pcie_host_init() will still carry the handcoded version of DT parsing, and that is not guarded against CONFIG_ARM64 being defined, where the parsing will happen again when you call of_create_pci_host_bridge().
Speaking of the handcoded DT parsing of resources: you are using restype == 0 as a way of selecting config space, *and then split the range size into two halves*. From what Jason Gunthorpe and Arnd were saying, config ranges in the DT tree should only be used for ECAM space, so no split is allowed.
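For reference, the hand-coded parsing in question looks roughly like this in the driver of that era (paraphrased from the range-parsing loop in dw_pcie_host_init()):

	if (restype == 0) {
		/*
		 * A ranges entry with restype == 0 is treated as config
		 * space, and its size is split into two halves for type 0
		 * and type 1 config accesses -- the split in question.
		 */
		of_pci_range_to_resource(&range, np, &pp->cfg);
		pp->cfg0_size = resource_size(&pp->cfg) / 2;
		pp->cfg1_size = resource_size(&pp->cfg) / 2;
	}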
Arnd, are you allowing this non-standard use to creep in the bindings?
Best regards, Liviu
On Wednesday 16 April 2014 17:57:24 Liviu Dudau wrote:
Jingoo,
Thanks for taking a stab at trying to convert a host bridge driver to use the new generic host bridge code.
I do however have concerns on the direction you took. You have split your driver in two, depending on whether it was CONFIG_ARM or CONFIG_ARM64, even though (with my series) there should be no reason why the host bridge driver would not work on other architectures as well once they are converted.
Right.
Also, some of the functions that you use have identical names but different signatures depending on what arch you have selected. This is really bad in my books!
It's only the sys_to_pcie() function, right?
You can probably simplify that to take a void pointer and have only one line difference.
What about creating functions that use my series directly if CONFIG_ARM64 is defined (or any CONFIG_ you want to create for your driver that you select from CONFIG_ARM64) and otherwise implement the CONFIG_ARM version? That way your driver will call only one API without any #ifdef and when arm code gets converted you drop your adaptation functions. Or (better yet), have a stab at converting bios32 (Rob Herring has already provided some hints on how to do it for arch/arm).
That would of course be best.
To give an example of how things are not going well in your version (not obvious from your patch, but you can see it once you apply it): dw_pcie_host_init() will still carry the handcoded version of DT parsing, and that is not guarded against CONFIG_ARM64 being defined, where the parsing will happen again when you call of_create_pci_host_bridge().
How about making the generic DT parsing code from of_create_pci_host_bridge() an exported function that can be called by drivers that don't use of_create_pci_host_bridge?
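Roughly, that could look like the following sketch. The of_pci_host_bridge_parse_ranges() name is hypothetical, standing in for whatever the exported parsing half of of_create_pci_host_bridge() would be called:

/*
 * Hypothetical: the DT parsing half of of_create_pci_host_bridge()
 * becomes an exported helper, so a driver that cannot yet use the full
 * bridge-creation path can still drop its hand-coded parser loop.
 */
int of_pci_host_bridge_parse_ranges(struct device_node *np,
				    struct list_head *resources);

static int dw_pcie_parse_dt(struct pcie_port *pp)
{
	LIST_HEAD(res);

	/* replaces the open-coded of_pci_range_parser loop */
	return of_pci_host_bridge_parse_ranges(pp->dev->of_node, &res);
}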
Speaking of the handcoded DT parsing of resources: you are using restype == 0 as a way of selecting config space, *and then split the range size into two halves*. From what Jason Gunthorpe and Arnd were saying, config ranges in the DT tree should only be used for ECAM space, so no split is allowed.
Arnd, are you allowing this non-standard use to creep in the bindings?
I fear it's too late to change that now. In retrospect we probably shouldn't have defined the binding like that.
Overall, my impression of the patch is that it should be possible to do the same with much fewer #ifdefs by first rearranging the code in one patch and then doing another patch on top to add the generic arm64 support.
On Thursday, April 17, 2014 3:26 AM, Arnd Bergmann wrote:
(+cc Mohit KUMAR, Pratyush Anand, Marek Vasut, Richard Zhu, Kishon Vijay Abraham I, Byungho An)
On Wednesday 16 April 2014 17:57:24 Liviu Dudau wrote:
Jingoo,
Thanks for taking a stab at trying to convert a host bridge driver to use the new generic host bridge code.
I do however have concerns on the direction you took. You have split your driver in two, depending on whether it was CONFIG_ARM or CONFIG_ARM64, even though (with my series) there should be no reason why the host bridge driver would not work on other architectures as well once they are converted.
Right.
Also, some of the functions that you use have identical names but different signatures depending on what arch you have selected. This is really bad in my books!
It's only the sys_to_pcie() function, right?
You can probably simplify that to take a void pointer and have only one line difference.
Do you mean the following? Would you give me more detailed advice?
static inline struct pcie_port *sys_to_pcie(void *sys)
{
	struct pcie_port *pp;

#ifdef CONFIG_ARM
	pp = ((struct pci_sys_data *)sys)->private_data;
#else
	pp = (struct pcie_port *)sys;
#endif

	return pp;
}
What about creating functions that use my series directly if CONFIG_ARM64 is defined (or any CONFIG_ you want to create for your driver that you select from CONFIG_ARM64) and otherwise implement the CONFIG_ARM version? That way your driver will call only one API without any #ifdef and when arm code gets converted you drop your adaptation functions. Or (better yet), have a stab at converting bios32 (Rob Herring has already provided some hints on how to do it for arch/arm).
To: Liviu Dudau
Sorry, but I will not implement this. First, you should have considered compatibility with ARM32 PCIe. Why do you want other engineers to take on this load?
That would of course be best.
To give an example of how things are not going well in your version (not obvious from your patch, but you can see it once you apply it): dw_pcie_host_init() will still carry the handcoded version of DT parsing, and that is not guarded against CONFIG_ARM64 being defined, where the parsing will happen again when you call of_create_pci_host_bridge().
How about making the generic DT parsing code from of_create_pci_host_bridge() an exported function that can be called by drivers that don't use of_create_pci_host_bridge?
Do you mean that of_create_pci_host_bridge() should be used for both ARM32 and ARM64 cases?
Speaking of the handcoded DT parsing of resources: you are using restype == 0 as a way of selecting config space, *and then split the range size into two halves*. From what Jason Gunthorpe and Arnd were saying, config ranges in the DT tree should only be used for ECAM space, so no split is allowed.
Arnd, are you allowing this non-standard use to creep in the bindings?
I fear it's too late to change that now. In retrospect we probably shouldn't have defined the binding like that.
Overall, my impression of the patch is that it should be possible to do the same with much fewer #ifdefs by first rearranging the code in one patch and then doing another patch on top to add the generic arm64 support.
I will try to reduce #ifdefs as much as possible.
Best regards, Jingoo Han
On Monday, April 21, 2014 10:54 AM, Jingoo Han wrote:
On Thursday, April 17, 2014 3:26 AM, Arnd Bergmann wrote:
On Wednesday 16 April 2014 17:57:24 Liviu Dudau wrote:
Jingoo,
Thanks for taking a stab at trying to convert a host bridge driver to use the new generic host bridge code.
I do however have concerns on the direction you took. You have split your driver in two, depending on whether it was CONFIG_ARM or CONFIG_ARM64, even though (with my series) there should be no reason why the host bridge driver would not work on other architectures as well once they are converted.
Right.
Also, some of the functions that you use have identical names but different signatures depending on what arch you have selected. This is really bad in my books!
[.....]
What about creating functions that use my series directly if CONFIG_ARM64 is defined (or any CONFIG_ you want to create for your driver that you select from CONFIG_ARM64) and otherwise implement the CONFIG_ARM version? That way your driver will call only one API without any #ifdef and when arm code gets converted you drop your adaptation functions. Or (better yet), have a stab at converting bios32 (Rob Herring has already provided some hints on how to do it for arch/arm).
To: Liviu Dudau
Sorry, but I will not implement this. First, you should have considered compatibility with ARM32 PCIe. Why do you want other engineers to take on this load?
(+cc Rob Herring)
Um, I am looking at Rob Herring's patchset for Versatile PCI. [1] Then, do you mean the following?
1. Add Rob Herring's patch converting bios32. [2]
2. Reference Rob Herring's patch in order to know how to handle "of_create_pci_host_bridge()" directly in ARM32. [3]
3. Use of_create_pci_host_bridge() for the designware PCIe driver in ARM32.
4. Also, use of_create_pci_host_bridge() for the designware PCIe driver in "ARM64".
[1] http://www.spinics.net/lists/linux-pci/msg30084.html
[2] http://www.spinics.net/lists/linux-pci/msg30083.html
[3] http://www.spinics.net/lists/linux-pci/msg30086.html
Best regards, Jingoo Han
On Mon, Apr 21, 2014 at 10:58:52AM +0100, Jingoo Han wrote:
On Monday, April 21, 2014 10:54 AM, Jingoo Han wrote:
On Thursday, April 17, 2014 3:26 AM, Arnd Bergmann wrote:
On Wednesday 16 April 2014 17:57:24 Liviu Dudau wrote:
Jingoo,
Thanks for taking a stab at trying to convert a host bridge driver to use the new generic host bridge code.
I do however have concerns on the direction you took. You have split your driver in two, depending on whether it was CONFIG_ARM or CONFIG_ARM64, even though (with my series) there should be no reason why the host bridge driver would not work on other architectures as well once they are converted.
Right.
Also, some of the functions that you use have identical names but different signatures depending on what arch you have selected. This is really bad in my books!
[.....]
What about creating functions that use my series directly if CONFIG_ARM64 is defined (or any CONFIG_ you want to create for your driver that you select from CONFIG_ARM64) and otherwise implement the CONFIG_ARM version? That way your driver will call only one API without any #ifdef and when arm code gets converted you drop your adaptation functions. Or (better yet), have a stab at converting bios32 (Rob Herring has already provided some hints on how to do it for arch/arm).
To: Liviu Dudau
Sorry, but I will not implement this. First, you should have considered compatibility with ARM32 PCIe. Why do you want other engineers to take on this load?
(+cc Rob Herring)
Um, I am looking at Rob Herring's patchset for Versatile PCI. [1] Then, do you mean the following?
1. Add Rob Herring's patch converting bios32. [2]
2. Reference Rob Herring's patch in order to know how to handle "of_create_pci_host_bridge()" directly in ARM32. [3]
3. Use of_create_pci_host_bridge() for the designware PCIe driver in ARM32.
4. Also, use of_create_pci_host_bridge() for the designware PCIe driver in "ARM64".
Sounds like a good plan. 3 and 4 should be one and the same in my opinion, but there might be more things in the designware driver that I am missing right now.
Best regards, Liviu
On Mon, Apr 21, 2014 at 4:58 AM, Jingoo Han jg1.han@samsung.com wrote:
On Monday, April 21, 2014 10:54 AM, Jingoo Han wrote:
On Thursday, April 17, 2014 3:26 AM, Arnd Bergmann wrote:
On Wednesday 16 April 2014 17:57:24 Liviu Dudau wrote:
Jingoo,
Thanks for taking a stab at trying to convert a host bridge driver to use the new generic host bridge code.
I do however have concerns on the direction you took. You have split your driver in two, depending on whether it was CONFIG_ARM or CONFIG_ARM64, even though (with my series) there should be no reason why the host bridge driver would not work on other architectures as well once they are converted.
Right.
Also, some of the functions that you use have identical names but different signatures depending on what arch you have selected. This is really bad in my books!
[.....]
What about creating functions that use my series directly if CONFIG_ARM64 is defined (or any CONFIG_ you want to create for your driver that you select from CONFIG_ARM64) and otherwise implement the CONFIG_ARM version? That way your driver will call only one API without any #ifdef and when arm code gets converted you drop your adaptation functions. Or (better yet), have a stab at converting bios32 (Rob Herring has already provided some hints on how to do it for arch/arm).
To: Liviu Dudau
Sorry, but I will not implement this.
Then you will not get what you want upstream.
First, you should have considered compatibility with ARM32 PCIe. Why do you want other engineers to take on this load?
That is how the process works.
(+cc Rob Herring)
Um, I am looking at Rob Herring's patchset for Versatile PCI. [1] Then, do you mean the following?
1. Add Rob Herring's patch converting bios32. [2]
I would expect all or most of this patch to go away in Liviu's next version. It does at least show we don't have to fix all of ARM PCI support to use of_create_pci_host_bridge on ARM.
Rob
On Mon, Apr 21, 2014 at 02:54:12AM +0100, Jingoo Han wrote:
On Thursday, April 17, 2014 3:26 AM, Arnd Bergmann wrote:
(+cc Mohit KUMAR, Pratyush Anand, Marek Vasut, Richard Zhu, Kishon Vijay Abraham I, Byungho An)
On Wednesday 16 April 2014 17:57:24 Liviu Dudau wrote:
Jingoo,
Thanks for taking a stab at trying to convert a host bridge driver to use the new generic host bridge code.
I do however have concerns on the direction you took. You have split your driver in two, depending on whether it was CONFIG_ARM or CONFIG_ARM64, even though (with my series) there should be no reason why the host bridge driver would not work on other architectures as well once they are converted.
Right.
Also, some of the functions that you use have identical names but different signatures depending on what arch you have selected. This is really bad in my books!
It's only the sys_to_pcie() function, right?
You can probably simplify that to take a void pointer and have only one line difference.
Do you mean the following? Would you give me more detailed advice?
static inline struct pcie_port *sys_to_pcie(void *sys)
{
	struct pcie_port *pp;

#ifdef CONFIG_ARM
	pp = ((struct pci_sys_data *)sys)->private_data;
#else
	pp = (struct pcie_port *)sys;
#endif

	return pp;
}
Yes, that would be better.
What about creating functions that use my series directly if CONFIG_ARM64 is defined (or any CONFIG_ you want to create for your driver that you select from CONFIG_ARM64) and otherwise implement the CONFIG_ARM version? That way your driver will call only one API without any #ifdef and when arm code gets converted you drop your adaptation functions. Or (better yet), have a stab at converting bios32 (Rob Herring has already provided some hints on how to do it for arch/arm).
To: Liviu Dudau
Sorry, but I will not implement this. First, you should have considered compatibility with ARM32 PCIe.
I did. I've also looked at the other implementations and have realised how unique and baroque the arm32 version is. If I had implemented that version for arm64 (Will Deacon had a stab internally) I would have had to remove all the quirks from the arch code, and all that would be left is something very similar to what you currently have. But that would've made the code usable only by ARM platforms and no one else. Why be so narrow?
Why do you want other engineers to take on this load?
I don't want to, but you are the first scratching that itch. This is your path to glory!
That would of course be best.
To give an example of how things are not going well in your version (not obvious from your patch, but you can see it once you apply it): dw_pcie_host_init() will still carry the handcoded version of DT parsing, and that is not guarded against CONFIG_ARM64 being defined, where the parsing will happen again when you call of_create_pci_host_bridge().
How about making the generic DT parsing code from of_create_pci_host_bridge() an exported function that can be called by drivers that don't use of_create_pci_host_bridge?
Do you mean that of_create_pci_host_bridge() should be used for both ARM32 and ARM64 cases?
Yes, ultimately.
Speaking of the handcoded DT parsing of resources: you are using restype == 0 as a way of selecting config space, *and then split the range size into two halves*. From what Jason Gunthorpe and Arnd were saying, config ranges in the DT tree should only be used for ECAM space, so no split is allowed.
Arnd, are you allowing this non-standard use to creep in the bindings?
I fear it's too late to change that now. In retrospect we probably shouldn't have defined the binding like that.
Overall, my impression of the patch is that it should be possible to do the same with much fewer #ifdefs by first rearranging the code in one patch and then doing another patch on top to add the generic arm64 support.
I will try to reduce #ifdefs as much as possible.
Thanks!
Best regards, Liviu
On Wed, Apr 16, 2014 at 07:26:15PM +0100, Arnd Bergmann wrote:
On Wednesday 16 April 2014 17:57:24 Liviu Dudau wrote:
Jingoo,
Thanks for taking a stab at trying to convert a host bridge driver to use the new generic host bridge code.
I do however have concerns on the direction you took. You have split your driver in two, depending on whether it was CONFIG_ARM or CONFIG_ARM64, even though (with my series) there should be no reason why the host bridge driver would not work on other architectures as well once they are converted.
Right.
Also, some of the functions that you use have identical names but different signatures depending on what arch you have selected. This is really bad in my books!
It's only the sys_to_pcie() function, right?
Right.
You can probably simplify that to take a void pointer and have only one line difference.
What about creating functions that use my series directly if CONFIG_ARM64 is defined (or any CONFIG_ you want to create for your driver that you select from CONFIG_ARM64) and otherwise implement the CONFIG_ARM version? That way your driver will call only one API without any #ifdef and when arm code gets converted you drop your adaptation functions. Or (better yet), have a stab at converting bios32 (Rob Herring has already provided some hints on how to do it for arch/arm).
That would of course be best.
To give an example of how things are not going well in your version (not obvious from your patch, but you can see it once you apply it): dw_pcie_host_init() will still carry the handcoded version of DT parsing, and that is not guarded against CONFIG_ARM64 being defined, where the parsing will happen again when you call of_create_pci_host_bridge().
How about making the generic DT parsing code from of_create_pci_host_bridge() an exported function that can be called by drivers that don't use of_create_pci_host_bridge?
Yes, I guess that should ease the transition to the new code even if the function should not be needed for the users of the framework.
Speaking of the handcoded DT parsing of resources: you are using restype == 0 as a way of selecting config space, *and then split the range size into two halves*. From what Jason Gunthorpe and Arnd were saying, config ranges in the DT tree should only be used for ECAM space, so no split is allowed.
Arnd, are you allowing this non-standard use to creep in the bindings?
I fear it's too late to change that now. In retrospect we probably shouldn't have defined the binding like that.
OK, but I'm going to NAK the driver using the same tricks for arm64. Jingoo will need to come up with an upgrade path where he has the config space described in the reg property and parses that if CONFIG_ARM64 is enabled.
Overall, my impression of the patch is that it should be possible to do the same with much fewer #ifdefs by first rearranging the code in one patch and then doing another patch on top to add the generic arm64 support.
Agree.
Best regards, Liviu
Samsung GH7 has four PCIe controllers, which can be used as root complexes for the PCIe interface.
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
---
 drivers/pci/host/Kconfig      |   2 +-
 drivers/pci/host/pci-exynos.c | 135 ++++++++++++++++++++++++++++++++++++++---
 2 files changed, 126 insertions(+), 11 deletions(-)
diff --git a/drivers/pci/host/Kconfig b/drivers/pci/host/Kconfig
index a6f67ec..3be047c 100644
--- a/drivers/pci/host/Kconfig
+++ b/drivers/pci/host/Kconfig
@@ -11,7 +11,7 @@ config PCIE_DW
 
 config PCI_EXYNOS
 	bool "Samsung Exynos PCIe controller"
-	depends on SOC_EXYNOS5440
+	depends on SOC_EXYNOS5440 || ARCH_GH7
 	select PCIEPORTBUS
 	select PCIE_DW
 
diff --git a/drivers/pci/host/pci-exynos.c b/drivers/pci/host/pci-exynos.c
index 3de6bfb..6e845ca 100644
--- a/drivers/pci/host/pci-exynos.c
+++ b/drivers/pci/host/pci-exynos.c
@@ -57,6 +57,8 @@ struct exynos_pcie {
 #define PCIE_NONSTICKY_RESET		0x024
 #define PCIE_APP_INIT_RESET		0x028
 #define PCIE_APP_LTSSM_ENABLE		0x02c
+#define PCIE_SYS_AUX_PWR_DET		0x038
+#define PCIE_SYS_AUX_PWR_ENABLE		(0x1 << 0)
 #define PCIE_ELBI_RDLH_LINKUP		0x064
 #define PCIE_ELBI_LTSSM_ENABLE		0x1
 #define PCIE_ELBI_SLV_AWMISC		0x11c
@@ -72,6 +74,23 @@ struct exynos_pcie {
 #define PCIE_PHY_TRSVREG_RESET		0x020
 #define PCIE_PHY_TRSV_RESET		0x024
 
+/* PCIe PHY glue registers */
+#define PCIE_PHY_GLUE_REG0		0x000
+#define PCIE_PHY_GLUE_GLOBAL_RESET	(0x1 << 0)
+#define PCIE_PHY_GLUE_COMMON_RESET	(0x1 << 1)
+#define PCIE_PHY_GLUE_MAC_RESET		(0x1 << 11)
+#define PCIE_PHY_GLUE_REG2		0x008
+#define PCIE_PHY_GLUE_CLK100M_DS_MAX	(0x7 << 0)
+#define PCIE_PHY_GLUE_CLK100M_RFCLK	(0x1 << 3)
+#define PCIE_PHY_GLUE_CLK100M_ENABLE	(0x1 << 4)
+#define PCIE_PHY_GLUE_PLL_BUF_ENABLE	(0x1 << 8)
+#define PCIE_PHY_GLUE_PLL_DIV_ENABLE	(0x1 << 9)
+#define PCIE_PHY_GLUE_REFCLK_IN(x)	(((x) & 0xf) << 10)
+#define PCIE_PHY_GLUE_REG3		0x00c
+#define PCIE_PHY_GLUE_REF_RATE_100MHZ	(0x2 << 0)
+#define PCIE_PHY_GLUE_REG4		0x010
+#define PCIE_PHY_GLUE_MODE_PCIE		(0x0 << 0)
+
 /* PCIe PHY registers */
 #define PCIE_PHY_IMPEDANCE		0x004
 #define PCIE_PHY_PLL_DIV_0		0x008
@@ -164,11 +183,45 @@ static void exynos_pcie_sideband_dbi_r_mode(struct pcie_port *pp, bool on)
 	}
 }
 
+static void exynos_pcie_set_phy_mode(struct pcie_port *pp)
+{
+	u32 val;
+	struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
+
+	if (!of_device_is_compatible(pp->dev->of_node, "samsung,gh7-pcie"))
+		return;
+
+	/* configure phy reference clock setting */
+	val = exynos_blk_readl(exynos_pcie, PCIE_PHY_GLUE_REG2);
+	val |= PCIE_PHY_GLUE_CLK100M_ENABLE | PCIE_PHY_GLUE_CLK100M_RFCLK |
+	       PCIE_PHY_GLUE_CLK100M_DS_MAX;
+	exynos_blk_writel(exynos_pcie, val, PCIE_PHY_GLUE_REG2);
+
+	val = exynos_blk_readl(exynos_pcie, PCIE_PHY_GLUE_REG2);
+	val |= PCIE_PHY_GLUE_PLL_DIV_ENABLE | PCIE_PHY_GLUE_PLL_BUF_ENABLE;
+	exynos_blk_writel(exynos_pcie, val, PCIE_PHY_GLUE_REG2);
+
+	val = exynos_blk_readl(exynos_pcie, PCIE_PHY_GLUE_REG2);
+	val |= PCIE_PHY_GLUE_REFCLK_IN(6);
+	exynos_blk_writel(exynos_pcie, val, PCIE_PHY_GLUE_REG2);
+
+	/* set phy reference clock */
+	exynos_blk_writel(exynos_pcie, PCIE_PHY_GLUE_REF_RATE_100MHZ,
+			  PCIE_PHY_GLUE_REG3);
+
+	/* set phy mode to pcie mode */
+	exynos_blk_writel(exynos_pcie, PCIE_PHY_GLUE_MODE_PCIE,
+			  PCIE_PHY_GLUE_REG4);
+}
+
 static void exynos_pcie_assert_core_reset(struct pcie_port *pp)
 {
 	u32 val;
 	struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
 
+	if (of_device_is_compatible(pp->dev->of_node, "samsung,gh7-pcie"))
+		return;
+
 	val = exynos_elb_readl(exynos_pcie, PCIE_CORE_RESET);
 	val &= ~PCIE_CORE_RESET_ENABLE;
 	exynos_elb_writel(exynos_pcie, val, PCIE_CORE_RESET);
@@ -190,27 +243,48 @@ static void exynos_pcie_deassert_core_reset(struct pcie_port *pp)
 	exynos_elb_writel(exynos_pcie, 1, PCIE_NONSTICKY_RESET);
 	exynos_elb_writel(exynos_pcie, 1, PCIE_APP_INIT_RESET);
 	exynos_elb_writel(exynos_pcie, 0, PCIE_APP_INIT_RESET);
-	exynos_blk_writel(exynos_pcie, 1, PCIE_PHY_MAC_RESET);
+
+	if (of_device_is_compatible(pp->dev->of_node, "samsung,gh7-pcie")) {
+		val = exynos_blk_readl(exynos_pcie, PCIE_PHY_GLUE_REG0);
+		val |= PCIE_PHY_GLUE_MAC_RESET;
+		exynos_blk_writel(exynos_pcie, val, PCIE_PHY_GLUE_REG0);
+	} else {
+		exynos_blk_writel(exynos_pcie, 1, PCIE_PHY_MAC_RESET);
+	}
 }
 
 static void exynos_pcie_assert_phy_reset(struct pcie_port *pp)
 {
 	struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
 
+	if (of_device_is_compatible(pp->dev->of_node, "samsung,gh7-pcie"))
+		return;
+
 	exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_MAC_RESET);
 	exynos_blk_writel(exynos_pcie, 1, PCIE_PHY_GLOBAL_RESET);
 }
 
 static void exynos_pcie_deassert_phy_reset(struct pcie_port *pp)
 {
+	u32 val;
 	struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
 
-	exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_GLOBAL_RESET);
-	exynos_elb_writel(exynos_pcie, 1, PCIE_PWR_RESET);
-	exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_COMMON_RESET);
-	exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_CMN_REG);
-	exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_TRSVREG_RESET);
-	exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_TRSV_RESET);
+	if (of_device_is_compatible(pp->dev->of_node, "samsung,gh7-pcie")) {
+		val = exynos_blk_readl(exynos_pcie, PCIE_PHY_GLUE_REG0);
+		val |= PCIE_PHY_GLUE_GLOBAL_RESET;
+		exynos_blk_writel(exynos_pcie, val, PCIE_PHY_GLUE_REG0);
+		exynos_elb_writel(exynos_pcie, 1, PCIE_PWR_RESET);
+		val = exynos_blk_readl(exynos_pcie, PCIE_PHY_GLUE_REG0);
+		val |= PCIE_PHY_GLUE_COMMON_RESET;
+		exynos_blk_writel(exynos_pcie, val, PCIE_PHY_GLUE_REG0);
+	} else {
+		exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_GLOBAL_RESET);
+		exynos_elb_writel(exynos_pcie, 1, PCIE_PWR_RESET);
+		exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_COMMON_RESET);
+		exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_CMN_REG);
+		exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_TRSVREG_RESET);
+		exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_TRSV_RESET);
+	}
 }
 
 static void exynos_pcie_power_on_phy(struct pcie_port *pp)
@@ -269,6 +343,9 @@ static void exynos_pcie_init_phy(struct pcie_port *pp)
 {
 	struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
 
+	if (of_device_is_compatible(pp->dev->of_node, "samsung,gh7-pcie"))
+		return;
+
 	/* DCC feedback control off */
 	exynos_phy_writel(exynos_pcie, 0x29, PCIE_PHY_DCC_FEEDBACK);
 
@@ -305,6 +382,26 @@ static void exynos_pcie_init_phy(struct pcie_port *pp)
 	exynos_phy_writel(exynos_pcie, 0xa0, PCIE_PHY_TRSV3_LVCC);
 }
 
+static void exynos_pcie_pulse_common_reset(struct pcie_port *pp)
+{
+	u32 val;
+	struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
+
+	if (of_device_is_compatible(pp->dev->of_node, "samsung,gh7-pcie")) {
+		val = exynos_blk_readl(exynos_pcie, PCIE_PHY_GLUE_REG0);
+		val &= ~PCIE_PHY_GLUE_COMMON_RESET;
+		exynos_blk_writel(exynos_pcie, val, PCIE_PHY_GLUE_REG0);
+		udelay(500);
+		val = exynos_blk_readl(exynos_pcie, PCIE_PHY_GLUE_REG0);
+		val |= PCIE_PHY_GLUE_COMMON_RESET;
+		exynos_blk_writel(exynos_pcie, val, PCIE_PHY_GLUE_REG0);
+	} else {
+		exynos_blk_writel(exynos_pcie, 1, PCIE_PHY_COMMON_RESET);
+		udelay(500);
+		exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_COMMON_RESET);
+	}
+}
+
 static void exynos_pcie_assert_reset(struct pcie_port *pp)
 {
 	struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
@@ -326,6 +423,9 @@ static int exynos_pcie_establish_link(struct pcie_port *pp)
 		return 0;
 	}
 
+	/* set phy mode */
+	exynos_pcie_set_phy_mode(pp);
+
 	/* assert reset signals */
 	exynos_pcie_assert_core_reset(pp);
 	exynos_pcie_assert_phy_reset(pp);
@@ -340,9 +440,7 @@ static int exynos_pcie_establish_link(struct pcie_port *pp)
 	exynos_pcie_init_phy(pp);
 
 	/* pulse for common reset */
-	exynos_blk_writel(exynos_pcie, 1, PCIE_PHY_COMMON_RESET);
-	udelay(500);
-	exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_COMMON_RESET);
+	exynos_pcie_pulse_common_reset(pp);
 
 	/* de-assert core reset */
 	exynos_pcie_deassert_core_reset(pp);
@@ -357,6 +455,12 @@ static int exynos_pcie_establish_link(struct pcie_port *pp)
 	exynos_elb_writel(exynos_pcie, PCIE_ELBI_LTSSM_ENABLE,
 			  PCIE_APP_LTSSM_ENABLE);
 
+	if (of_device_is_compatible(pp->dev->of_node, "samsung,gh7-pcie")) {
+		/* supply auxiliary power */
+		exynos_elb_writel(exynos_pcie, PCIE_SYS_AUX_PWR_ENABLE,
+				  PCIE_SYS_AUX_PWR_DET);
+	}
+
 	/* check if the link is up or not */
 	while (!dw_pcie_link_up(pp)) {
 		mdelay(100);
@@ -564,6 +668,7 @@ static int __init exynos_pcie_probe(struct platform_device *pdev)
 	struct resource *elbi_base;
 	struct resource *phy_base;
 	struct resource *block_base;
+	struct resource *dbi_base;
 	int ret;
 
 	exynos_pcie = devm_kzalloc(&pdev->dev, sizeof(*exynos_pcie),
@@ -619,6 +724,15 @@ static int __init exynos_pcie_probe(struct platform_device *pdev)
 		goto fail_bus_clk;
 	}
 
+	if (of_device_is_compatible(pdev->dev.of_node, "samsung,gh7-pcie")) {
+		dbi_base = platform_get_resource(pdev, IORESOURCE_MEM, 3);
+		pp->dbi_base = devm_ioremap_resource(&pdev->dev, dbi_base);
+		if (IS_ERR(pp->dbi_base)) {
+			ret = PTR_ERR(pp->dbi_base);
+			goto fail_bus_clk;
+		}
+	}
+
 	ret = add_pcie_port(pp, pdev);
 	if (ret < 0)
 		goto fail_bus_clk;
@@ -645,6 +759,7 @@ static int __exit exynos_pcie_remove(struct platform_device *pdev)
 
 static const struct of_device_id exynos_pcie_of_match[] = {
 	{ .compatible = "samsung,exynos5440-pcie", },
+	{ .compatible = "samsung,gh7-pcie", },
 	{},
 };
 MODULE_DEVICE_TABLE(of, exynos_pcie_of_match);
On Wednesday 16 April 2014, Jingoo Han wrote:
Samsung GH7 has four PCIe controllers, which can be used as root complexes for the PCIe interface.
Signed-off-by: Jingoo Han <jg1.han@samsung.com>

 drivers/pci/host/Kconfig      |   2 +-
 drivers/pci/host/pci-exynos.c | 135 ++++++++++++++++++++++++++++++++++++++---
 2 files changed, 126 insertions(+), 11 deletions(-)
Can you explain how much the GH7 and Exynos front-ends actually have in common? Would it make sense to have a separate driver for gh7?
Also, if gh7 is expected to run a full firmware, I think you should do all the setup in the firmware before booting Linux, and just do the required run-time operations in the driver itself.
Arnd
Arnd Bergmann wrote:
On Wednesday 16 April 2014, Jingoo Han wrote:
Samsung GH7 has four PCIe controllers, which can be used as root complexes for the PCIe interface.
Signed-off-by: Jingoo Han <jg1.han@samsung.com>

 drivers/pci/host/Kconfig      |   2 +-
 drivers/pci/host/pci-exynos.c | 135 ++++++++++++++++++++++++++++++++++++++---
 2 files changed, 126 insertions(+), 11 deletions(-)
+ Byungho An, Ilho Lee
Hi Arnd,
Can you explain how much the GH7 and Exynos front-ends actually have in common? Would it make sense to have a separate driver for gh7?
Basically, the ARMv8-based GH7 has the same PCIe hardware IP as the previous ARMv7-based exynos5440, though several PCIe features are different. In other words, the basic PCIe functionality is the same. So I think it would be nice if we could use one PCIe device driver for both SoCs.
However, if we need to support PCIe with a separate device driver for each because of the differences between 32-bit and 64-bit, please kindly let us know. Honestly, I'm not sure about that right now.
Also, if gh7 is expected to run a full firmware, I think you should do all the setup in the firmware before booting Linux, and just do the required run-time operations in the driver itself.
Well, we're expecting all the setup to be done by the device driver in the kernel, not by firmware.
Thanks, Kukjin
On Wednesday 23 April 2014 18:19:30 Kukjin Kim wrote:
Basically, the ARMv8-based GH7 has the same PCIe hardware IP as the previous ARMv7-based exynos5440, though several PCIe features are different. In other words, the basic PCIe functionality is the same. So I think it would be nice if we could use one PCIe device driver for both SoCs.
Ok, I see. I was just trying to get a feeling for how much is shared or SoC specific between your variants. If they are different enough, it may be easier to have two drivers.
However, if we need to support PCIe with a separate device driver for each because of the differences between 32-bit and 64-bit, please kindly let us know. Honestly, I'm not sure about that right now.
We are already working on ideas to minimize the differences between arm32 and arm64 PCI support; it will just take more work.
Also, if gh7 is expected to run a full firmware, I think you should do all the setup in the firmware before booting Linux, and just do the required run-time operations in the driver itself.
Well, we're expecting all the setup to be done by the device driver in the kernel, not by firmware.
Ok, just make sure this hardware never shows up in servers then.
Unfortunately we are in a tricky situation on arm64 because we have to support both server-type SoCs and embedded-type SoCs. In an embedded system, you can't trust the boot loader to do a proper setup of all the hardware, so the kernel needs full control over all the initialization. In a server, the initialization is the responsibility of the firmware, and we don't want the kernel to even know about those registers.
My hope is that all server chips use an SBSA compliant PCIe implementation, but we already have X-Gene, which is doing server workloads with a nonstandard PCIe, and it's possible that there will also be server-like systems with a DesignWare PCIe block instead of an SBSA compliant one. We can still support those, but I don't want to see more than one implementation of dw-pcie on servers. Just like we have the generic PCIe support that Will is doing for virtual machines and SBSA compliant systems, we would do one dw-pcie variant for all systems that come with a host firmware and rely on it being set up already.
Arnd
On Wed, Apr 23, 2014 at 12:03:47PM +0100, Arnd Bergmann wrote:
On Wednesday 23 April 2014 18:19:30 Kukjin Kim wrote:
Basically, the ARMv8-based GH7 has the same PCIe hardware IP as the previous ARMv7-based exynos5440, though several PCIe features are different. In other words, the basic PCIe functionality is the same. So I think it would be nice if we could use one PCIe device driver for both SoCs.
Ok, I see. I was just trying to get a feeling for how much is shared or SoC specific between your variants. If they are different enough, it may be easier to have two drivers.
However, if we need to support PCIe with a separate device driver for each because of the differences between 32-bit and 64-bit, please kindly let us know. Honestly, I'm not sure about that right now.
We are already working on ideas to minimize the differences between arm32 and arm64 PCI support; it will just take more work.
Also, if gh7 is expected to run a full firmware, I think you should do all the setup in the firmware before booting Linux, and just do the required run-time operations in the driver itself.
Well, we're expecting all the setup to be done by the device driver in the kernel, not by firmware.
Ok, just make sure this hardware never shows up in servers then.
Not necessarily, as long as the setup will always happen in the kernel?
Unfortunately we are in a tricky situation on arm64 because we have to support both server-type SoCs and embedded-type SoCs. In an embedded system, you can't trust the boot loader to do a proper setup of all the hardware, so the kernel needs full control over all the initialization. In a server, the initialization is the responsibility of the firmware, and we don't want the kernel to even know about those registers.
My hope is that all server chips use an SBSA compliant PCIe implementation, but we already have X-Gene, which is doing server workloads with a nonstandard PCIe, and it's possible that there will also be server-like systems with a DesignWare PCIe block instead of an SBSA compliant one. We can still support those, but I don't want to see more than one implementation of dw-pcie on servers. Just like we have the generic PCIe support that Will is doing for virtual machines and SBSA compliant systems, we would do one dw-pcie variant for all systems that come with a host firmware and rely on it being set up already.
There is nothing in the SBSA that mandates firmware setup. All it requires is that the hardware is set up in a way that is not specific to a board or a particular OEM. Surely if the setup being done for GH7 is always the same, it should fit the bill?
Kind regards, Liviu
Arnd
On Wednesday 23 April 2014 15:23:16 Liviu Dudau wrote:
Unfortunately we are in a tricky situation on arm64 because we have to support both server-type SoCs and embedded-type SoCs. In an embedded system, you can't trust the boot loader to do a proper setup of all the hardware, so the kernel needs full control over all the initialization. In a server, the initialization is the responsibility of the firmware, and we don't want the kernel to even know about those registers.
My hope is that all server chips use an SBSA compliant PCIe implementation, but we already have X-Gene, which is doing server workloads with a nonstandard PCIe, and it's possible that there will also be server-like systems with a DesignWare PCIe block instead of an SBSA compliant one. We can still support those, but I don't want to see more than one implementation of dw-pcie on servers. Just like we have the generic PCIe support that Will is doing for virtual machines and SBSA compliant systems, we would do one dw-pcie variant for all systems that come with a host firmware and rely on it being set up already.
There is nothing in the SBSA that mandates firmware setup. All it requires is that the hardware is set up in a way that is not specific to a board or a particular OEM. Surely if the setup being done for GH7 is always the same, it should fit the bill?
GH7 is already not SBSA compliant because it uses a nonstandard config space access method, and it uses its own MSI controller rather than GIC. This means it violates at least two out of the four clauses in SBSA describing PCIe.
Regardless of this, the level of detail describing config space and MSI handling in SBSA can only make sense if the purpose is to handle all compliant implementations without platform specific code. If you require platform specific setup code in the OS, this underlying assumption no longer holds true and there is no point in having a spec in the first place.
I think we should treat DW-PCIe in the same way if anyone attempts to use that in a server, e.g. in SBSA level 0. As you can see here, even when reusing hardware between Exynos and GH7, you can't just use the same init code, so it has to be in firmware to be any good. On a real server platform, you can't require a kernel upgrade every time a new SoC comes out, any basic functionality like PCI just has to work with existing OS images.
Arnd
Arnd Bergmann wrote:
On Wednesday 23 April 2014 15:23:16 Liviu Dudau wrote:
Unfortunately we are in a tricky situation on arm64 because we have to support both server-type SoCs and embedded-type SoCs. In an embedded system, you can't trust the boot loader to do a proper setup of all the hardware, so the kernel needs full control over all the initialization. In a server, the initialization is the responsibility of the firmware, and we don't want the kernel to even know about those registers.
BTW, actually we can trust the boot loader to do the required things on mobile too ;-)
My hope is that all server chips use an SBSA compliant PCIe implementation, but we already have X-Gene, which is doing server workloads with a nonstandard PCIe, and it's possible that there will also be server-like systems with a DesignWare PCIe block instead of an SBSA compliant one. We can still support those, but I don't want to see more than one implementation of dw-pcie on servers. Just like we have the generic PCIe support that Will is doing for virtual machines and SBSA compliant systems, we would do one dw-pcie variant for all systems that come with a host firmware and rely on it being set up already.
OK, and I think just one device driver would be nice, whether for embedded or server.
There is nothing in the SBSA that mandates firmware setup. All it requires
Yeah, I couldn't find that in the SBSA...
is that the hardware is set up in a way that is not specific to a board or a particular OEM. Surely if the setup being done for GH7 is always the same, it should fit the bill?
But Arnd's comments are about firmware being based on each SoC, not each board?...
GH7 is already not SBSA compliant because it uses a nonstandard config space access method, and it uses its own MSI controller rather than GIC. This means it violates at least two out of the four clauses in SBSA describing PCIe.
OK, I see. Honestly, we just focused on how to support PCI on both exynos5440 and GH7 SoCs.
Regardless of this, the level of detail describing config space and MSI handling in SBSA can only make sense if the purpose is to handle all compliant implementations without platform specific code. If you require platform specific setup code in the OS, this underlying assumption no longer holds true and there is no point in having a spec in the first place.
OK, your assumption makes sense to us.
I think we should treat DW-PCIe in the same way if anyone attempts to use that in a server, e.g. in SBSA level 0. As you can see here,
Agreed. BTW, how about GICv2m for level 1? Can it be supported the same way, in one DW-PCIe driver?
even when reusing hardware between Exynos and GH7, you can't just use the same init code, so it has to be in firmware to be any good. On a real server platform, you can't require a kernel upgrade every time a new SoC comes out, any basic functionality like PCI just has to work with existing OS images.
OK, when Will's driver is ready, we will test it on GH7 with the setup for PCIe included in firmware. Anyway I hope we can use the driver in 3.16 :-)
Thanks, Kukjin
On Thursday 24 April 2014 13:53:47 Kukjin Kim wrote:
Arnd Bergmann wrote:
On Wednesday 23 April 2014 15:23:16 Liviu Dudau wrote:
Unfortunately we are in a tricky situation on arm64 because we have to support both server-type SoCs and embedded-type SoCs. In an embedded system, you can't trust the boot loader to do a proper setup of all the hardware, so the kernel needs full control over all the initialization. In a server, the initialization is the responsibility of the firmware, and we don't want the kernel to even know about those registers.
BTW, actually we can trust the boot loader to do the required things on mobile as well ;-)
Not really. Those boot loaders do the bare minimum to get one kernel running. There is no testing done with distro kernels (usually because they won't work anyway), and they wouldn't touch (or describe in DT) any hardware that isn't actually used by the kernel that initially ships with the device.
If we could trust the boot loader to do all the necessary setup, we would get rid of a lot of kernel code.
My hope is that all server chips use an SBSA compliant PCIe implementation, but we already have X-Gene, which is doing server workloads with a nonstandard PCIe, and it's possible that there will also be server-like systems with a DesignWare PCIe block instead of an SBSA compliant one. We can still support those, but I don't want to see more than one implementation of dw-pcie on servers. Just like we have the generic PCIe support that Will is doing for virtual machines and SBSA compliant systems, we would do one dw-pcie variant for all systems that come with a host firmware and rely on it being set up already.
OK, and I think just one device driver would be nice, whether for embedded or server.
The runtime parts (e.g. config space access) should definitely be shared, and we should also share any peripheral drivers.
However, basic infrastructure like PCI on servers should just work and you really shouldn't require any driver code for it. SBSA gets this part right by defining the config space layout, so we can have a very trivial PCI host driver for all SBSA systems, even if the same hardware needs a SoC specific driver for embedded systems that don't initialize the PCI host at boot time.
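To make "trivial" concrete: with of_create_pci_host_bridge() from Liviu's series, such a driver's probe could be little more than the sketch below ("sbsa_ecam_ops" is a hypothetical set of ECAM-based accessors; firmware is assumed to have configured the window and trained the link already):

	#include <linux/err.h>
	#include <linux/pci.h>
	#include <linux/platform_device.h>

	static struct pci_ops sbsa_ecam_ops;	/* hypothetical ECAM accessors */

	static int sbsa_pcie_probe(struct platform_device *pdev)
	{
		struct pci_host_bridge *bridge;

		/* no SoC specific init at all: just register the bus and scan it */
		bridge = of_create_pci_host_bridge(&pdev->dev, &sbsa_ecam_ops, NULL);
		if (IS_ERR_OR_NULL(bridge))
			return bridge ? PTR_ERR(bridge) : -ENODEV;

		return 0;
	}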
There is also nothing wrong with embedded systems doing it the same way as servers and initializing everything before Linux starts. We just need to be prepared to add fixups when someone gets it wrong.
I think we should treat DW-PCIe in the same way if anyone attempts to use that in a server, e.g. in SBSA level 0. As you can see here,
Agreed. BTW, how about GICv2m for level 1? Can it be supported the same way, in one DW-PCIe driver?
I don't think anybody has done a DT binding for GICv2m or submitted a patch yet, but I'm pretty sure it can be done. We just need to come up with a proper DT representation to pick which MSI controller is used by default.
Hardware-wise you should be able to mix any combination of MSI controllers, but I would suspect that if there is a GICv2m or higher, we would always want to use that for performance reasons. The MSI controller in the dw-pcie block just sends a normal interrupt to the GIC, which means you lose all the benefits of using MSI.
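As a strawman for that DT representation (the property name is an assumption, nothing in this series defines it): the host bridge node could carry an "msi-parent" phandle pointing at the GICv2m node, and the driver would fall back to the internal dw-pcie MSI chip only when the property is absent:

	#include <linux/of.h>

	/*
	 * Strawman: resolve the preferred external MSI controller (e.g. a
	 * GICv2m node) via an assumed "msi-parent" phandle. A NULL return
	 * means "use the internal dw-pcie MSI controller"; the caller must
	 * of_node_put() a non-NULL result when done with it.
	 */
	static struct device_node *dw_pcie_pick_msi_parent(struct device_node *np)
	{
		return of_parse_phandle(np, "msi-parent", 0);
	}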
even when reusing hardware between Exynos and GH7, you can't just use the same init code, so it has to be in firmware to be any good. On a real server platform, you can't require a kernel upgrade every time a new SoC comes out, any basic functionality like PCI just has to work with existing OS images.
OK, when Will's driver is ready, we will test it on GH7 with the setup for PCIe included in firmware. Anyway I hope we can use the driver in 3.16 :-)
Ok, sounds good.
Arnd
On Wed, Apr 23, 2014 at 10:19:30AM +0100, Kukjin Kim wrote:
Arnd Bergmann wrote:
On Wednesday 16 April 2014, Jingoo Han wrote:
Samsung GH7 has four PCIe controllers, each of which can be used as a root complex for the PCIe interface.
Signed-off-by: Jingoo Han jg1.han@samsung.com
 drivers/pci/host/Kconfig      |   2 +-
 drivers/pci/host/pci-exynos.c | 135 ++++++++++++++++++++++++++++++++++++++---
 2 files changed, 126 insertions(+), 11 deletions(-)
- Byungho An, Ilho Lee
Hi Arnd,
Can you explain how much the GH7 and Exynos front-ends actually have in common? Would it make sense to have a separate driver for gh7?
Basically, the ARMv8 based GH7 has the same PCIe hardware IP as the previous ARMv7 based exynos5440, though several PCIe features are different. In other words, the basic PCIe functionality is the same. So I think it would be nice if we could use one PCIe device driver for both SoCs.
However, if we need to support PCIe with a separate device driver for each SoC because of the difference between 32-bit and 64-bit, please kindly let us know. Honestly, I'm not sure about that right now.
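For illustration, the single-driver option would mostly come down to matching both compatibles in one place; a sketch, where "samsung,gh7-pcie" is an assumed string and only "samsung,exynos5440-pcie" exists in the current binding:

	#include <linux/mod_devicetable.h>
	#include <linux/of.h>

	static const struct of_device_id exynos_pcie_of_match[] = {
		{ .compatible = "samsung,exynos5440-pcie", },
		{ .compatible = "samsung,gh7-pcie", },	/* assumed string */
		{},
	};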
Hi Kukjin,
I will let Arnd offer his view as a maintainer of DT enabled platforms for arch/arm, but in my understanding the goal is to convert individual host bridge drivers to use my patch series directly, as the series intentionally doesn't depend on any arch specific code, and to leave the existing bios32 code for the non-DT platforms and the ones that do not see the need to convert to the framework.
Rob Herring has posted an example of how to add support for a host bridge running under arm32 using my framework, so it is not an impossible task, and it can serve as an example for future conversions.
Does that make sense?
Best regards, Liviu
Also, if gh7 is expected to run a full firmware, I think you should do all the setup in the firmware before booting Linux, and just do the required run-time operations in the driver itself.
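A sketch of that split (the GH7 helper name is hypothetical; dw_pcie_link_up() is the existing designware helper): the probe path checks whether firmware already brought the link up, and only the embedded case falls back to the full init:

	static int gh7_pcie_establish_link(struct pcie_port *pp)
	{
		/* server case: firmware has already done the setup */
		if (dw_pcie_link_up(pp))
			return 0;

		/* embedded case: full PHY and link init in the kernel */
		return gh7_pcie_init_phy_and_link(pp);	/* hypothetical helper */
	}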
Well, we're expecting all the setup to be done by the device driver in the kernel, not in firmware.
Thanks, Kukjin
On Wednesday 23 April 2014, Liviu Dudau wrote:
On Wed, Apr 23, 2014 at 10:19:30AM +0100, Kukjin Kim wrote:
Arnd Bergmann wrote:
Can you explain how much the GH7 and Exynos front-ends actually have in common? Would it make sense to have a separate driver for gh7?
Basically, the ARMv8 based GH7 has the same PCIe hardware IP as the previous ARMv7 based exynos5440, though several PCIe features are different. In other words, the basic PCIe functionality is the same. So I think it would be nice if we could use one PCIe device driver for both SoCs.
However, if we need to support PCIe with a separate device driver for each SoC because of the difference between 32-bit and 64-bit, please kindly let us know. Honestly, I'm not sure about that right now.
Hi Kukjin,
I will let Arnd offer his view as a maintainer of DT enabled platforms for arch/arm, but in my understanding the goal is to convert individual host bridge drivers to use my patch series directly, as the series intentionally doesn't depend on any arch specific code, and to leave the existing bios32 code for the non-DT platforms and the ones that do not see the need to convert to the framework.
Rob Herring has posted an example of how to add support for a host bridge running under arm32 using my framework, so it is not an impossible task, and it can serve as an example for future conversions.
Yes, I agree that would be the best approach. I'm not sure if it makes sense to convert the various dw-pcie front-ends separately or if we have to do them all at once though. Doing them together may require some more coordination.
Let's also make sure to keep Russell in the loop regarding arm32 PCI support. He probably has some ideas as well on how we should proceed with the existing code.
Arnd