Hi,
This patchset brings support for Silicon Labs' Co-Processor Communication (CPC) protocol as a transport layer for Greybus. It is introduced as a module that sits between Greybus and CPC host device driver implementations, such as SDIO or SPI. This patchset includes SDIO as the physical layer, but the protocol is not final and might change; it is mostly there to showcase all the elements.
+----------------------------------------------------+
|                       Greybus                      |
+----------------------------------------------------+
                         /|\
                          |
                         \|/
+----------------------------------------------------+
|                         CPC                        |
+----------------------------------------------------+
    /|\                  /|\                  /|\
     |                    |                    |
    \|/                  \|/                  \|/
+----------+         +---------+         +-----------+
|   SDIO   |         |   SPI   |         |  Others   |
+----------+         +---------+         +-----------+
CPC implements some of the features of UniPro that Greybus relies upon, like reliable transmission. CPC takes care of detecting transmission errors and retransmitting frames if necessary, but that feature is not part of the RFC to keep it concise. There is also a flow-control feature that prevents sending messages to already-full cports.
In order to implement these features, a 4-byte header is prepended to Greybus messages, making the whole header 12 bytes (the Greybus header itself is 8 bytes).
This RFC starts by implementing a shim layer between physical bus drivers (like SDIO and SPI) and Greybus, and progressively adds more elements to make it useful in its own right. Finally, an SDIO driver is added to enable communication with a remote device.
Changes since the RFC:
- added missing Signed-off-by on one commit
- added SDIO driver to give a full example
Damien Riégel (13):
  greybus: cpc: add minimal CPC Host Device infrastructure
  greybus: cpc: introduce CPC cport structure
  greybus: cpc: use socket buffers instead of gb_message in TX path
  greybus: cpc: pack cport ID in Greybus header
  greybus: cpc: switch RX path to socket buffers
  greybus: cpc: introduce CPC header structure
  greybus: cpc: account for CPC header size in RX and TX path
  greybus: cpc: add and validate sequence numbers
  greybus: cpc: acknowledge all incoming messages
  greybus: cpc: use holding queue instead of sending out immediately
  greybus: cpc: honour remote's RX window
  greybus: cpc: let host device drivers dequeue TX frames
  greybus: cpc: add private data pointer in CPC Host Device
Gabriel Beaulieu (1):
  greybus: cpc: add CPC SDIO host driver
 MAINTAINERS                    |   6 +
 drivers/greybus/Kconfig        |   2 +
 drivers/greybus/Makefile       |   2 +
 drivers/greybus/cpc/Kconfig    |  22 ++
 drivers/greybus/cpc/Makefile   |   9 +
 drivers/greybus/cpc/cpc.h      |  75 +++++
 drivers/greybus/cpc/cport.c    | 107 +++++++
 drivers/greybus/cpc/header.c   | 146 +++++++++
 drivers/greybus/cpc/header.h   |  55 ++++
 drivers/greybus/cpc/host.c     | 313 +++++++++++++++++++
 drivers/greybus/cpc/host.h     |  63 ++++
 drivers/greybus/cpc/protocol.c | 167 ++++++++++
 drivers/greybus/cpc/sdio.c     | 554 +++++++++++++++++++++++++++++++++
 13 files changed, 1521 insertions(+)
 create mode 100644 drivers/greybus/cpc/Kconfig
 create mode 100644 drivers/greybus/cpc/Makefile
 create mode 100644 drivers/greybus/cpc/cpc.h
 create mode 100644 drivers/greybus/cpc/cport.c
 create mode 100644 drivers/greybus/cpc/header.c
 create mode 100644 drivers/greybus/cpc/header.h
 create mode 100644 drivers/greybus/cpc/host.c
 create mode 100644 drivers/greybus/cpc/host.h
 create mode 100644 drivers/greybus/cpc/protocol.c
 create mode 100644 drivers/greybus/cpc/sdio.c
As the first step toward adding CPC support to Greybus, add a very minimal module for CPC host devices. For now, this module only proxies calls to the Greybus host device API and does nothing useful on its own, but further commits will build on this base to add features.
Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
---
 MAINTAINERS                  |  6 +++
 drivers/greybus/Kconfig      |  2 +
 drivers/greybus/Makefile     |  2 +
 drivers/greybus/cpc/Kconfig  | 10 +++++
 drivers/greybus/cpc/Makefile |  6 +++
 drivers/greybus/cpc/host.c   | 84 ++++++++++++++++++++++++++++++++++++
 drivers/greybus/cpc/host.h   | 40 +++++++++++++++++
 7 files changed, 150 insertions(+)
 create mode 100644 drivers/greybus/cpc/Kconfig
 create mode 100644 drivers/greybus/cpc/Makefile
 create mode 100644 drivers/greybus/cpc/host.c
 create mode 100644 drivers/greybus/cpc/host.h
diff --git a/MAINTAINERS b/MAINTAINERS index 6d1de82e6dc..56daf9ec310 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -10774,6 +10774,12 @@ S: Maintained F: Documentation/devicetree/bindings/net/ti,cc1352p7.yaml F: drivers/greybus/gb-beagleplay.c
+GREYBUS CPC DRIVERS +M: Damien Riégel damien.riegel@silabs.com +R: Silicon Labs Kernel Team linux-devel@silabs.com +S: Supported +F: drivers/greybus/cpc/* + GREYBUS SUBSYSTEM M: Johan Hovold johan@kernel.org M: Alex Elder elder@kernel.org diff --git a/drivers/greybus/Kconfig b/drivers/greybus/Kconfig index c3f056d28b0..565a0fdcb2c 100644 --- a/drivers/greybus/Kconfig +++ b/drivers/greybus/Kconfig @@ -30,6 +30,8 @@ config GREYBUS_BEAGLEPLAY To compile this code as a module, chose M here: the module will be called gb-beagleplay.ko
+source "drivers/greybus/cpc/Kconfig" + config GREYBUS_ES2 tristate "Greybus ES3 USB host controller" depends on USB diff --git a/drivers/greybus/Makefile b/drivers/greybus/Makefile index d986e94f889..92fe1d62691 100644 --- a/drivers/greybus/Makefile +++ b/drivers/greybus/Makefile @@ -21,6 +21,8 @@ ccflags-y += -I$(src) obj-$(CONFIG_GREYBUS_BEAGLEPLAY) += gb-beagleplay.o
# Greybus Host controller drivers +obj-$(CONFIG_GREYBUS_CPC) += cpc/ + gb-es2-y := es2.o
obj-$(CONFIG_GREYBUS_ES2) += gb-es2.o diff --git a/drivers/greybus/cpc/Kconfig b/drivers/greybus/cpc/Kconfig new file mode 100644 index 00000000000..ab96fedd0de --- /dev/null +++ b/drivers/greybus/cpc/Kconfig @@ -0,0 +1,10 @@ +# SPDX-License-Identifier: GPL-2.0 + +config GREYBUS_CPC + tristate "Greybus CPC driver" + help + Select this option if you have a Silicon Labs device that acts as a + Greybus SVC. + + To compile this code as a module, chose M here: the module will be + called gb-cpc.ko diff --git a/drivers/greybus/cpc/Makefile b/drivers/greybus/cpc/Makefile new file mode 100644 index 00000000000..490982a0ff5 --- /dev/null +++ b/drivers/greybus/cpc/Makefile @@ -0,0 +1,6 @@ +# SPDX-License-Identifier: GPL-2.0 + +gb-cpc-y := host.o + +# CPC core +obj-$(CONFIG_GREYBUS_CPC) += gb-cpc.o diff --git a/drivers/greybus/cpc/host.c b/drivers/greybus/cpc/host.c new file mode 100644 index 00000000000..80516517ff6 --- /dev/null +++ b/drivers/greybus/cpc/host.c @@ -0,0 +1,84 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025, Silicon Laboratories, Inc. 
+ */ + +#include <linux/err.h> +#include <linux/greybus.h> +#include <linux/module.h> + +#include "host.h" + +static struct cpc_host_device *gb_hd_to_cpc_hd(struct gb_host_device *hd) +{ + return (struct cpc_host_device *)&hd->hd_priv; +} + +static int cpc_gb_message_send(struct gb_host_device *gb_hd, u16 cport_id, + struct gb_message *message, gfp_t gfp_mask) +{ + struct cpc_host_device *cpc_hd = gb_hd_to_cpc_hd(gb_hd); + + return cpc_hd->driver->message_send(cpc_hd, cport_id, message, gfp_mask); +} + +static void cpc_gb_message_cancel(struct gb_message *message) +{ + /* Not implemented */ +} + +static struct gb_hd_driver cpc_gb_driver = { + .hd_priv_size = sizeof(struct cpc_host_device), + .message_send = cpc_gb_message_send, + .message_cancel = cpc_gb_message_cancel, +}; + +struct cpc_host_device *cpc_hd_create(struct cpc_hd_driver *driver, struct device *parent) +{ + struct cpc_host_device *cpc_hd; + struct gb_host_device *hd; + + if ((!driver->message_send) || (!driver->message_cancel)) { + dev_err(parent, "missing mandatory callbacks\n"); + return ERR_PTR(-EINVAL); + } + + hd = gb_hd_create(&cpc_gb_driver, parent, GB_CPC_MSG_SIZE_MAX, GB_CPC_NUM_CPORTS); + if (IS_ERR(hd)) + return (struct cpc_host_device *)hd; + + cpc_hd = gb_hd_to_cpc_hd(hd); + cpc_hd->gb_hd = hd; + cpc_hd->driver = driver; + + return cpc_hd; +} +EXPORT_SYMBOL_GPL(cpc_hd_create); + +int cpc_hd_add(struct cpc_host_device *cpc_hd) +{ + return gb_hd_add(cpc_hd->gb_hd); +} +EXPORT_SYMBOL_GPL(cpc_hd_add); + +void cpc_hd_put(struct cpc_host_device *cpc_hd) +{ + return gb_hd_put(cpc_hd->gb_hd); +} +EXPORT_SYMBOL_GPL(cpc_hd_put); + +void cpc_hd_del(struct cpc_host_device *cpc_hd) +{ + return gb_hd_del(cpc_hd->gb_hd); +} +EXPORT_SYMBOL_GPL(cpc_hd_del); + +void cpc_hd_rcvd(struct cpc_host_device *cpc_hd, u16 cport_id, u8 *data, size_t length) +{ + greybus_data_rcvd(cpc_hd->gb_hd, cport_id, data, length); +} +EXPORT_SYMBOL_GPL(cpc_hd_rcvd); + +MODULE_DESCRIPTION("Greybus over CPC"); 
+MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Silicon Laboratories, Inc."); diff --git a/drivers/greybus/cpc/host.h b/drivers/greybus/cpc/host.h new file mode 100644 index 00000000000..f55feb303f4 --- /dev/null +++ b/drivers/greybus/cpc/host.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) 2025, Silicon Laboratories, Inc. + */ + +#ifndef __CPC_HOST_H +#define __CPC_HOST_H + +#include <linux/device.h> +#include <linux/greybus.h> +#include <linux/types.h> + +#define GB_CPC_MSG_SIZE_MAX 4096 +#define GB_CPC_NUM_CPORTS 8 + +struct cpc_host_device; + +struct cpc_hd_driver { + int (*message_send)(struct cpc_host_device *hd, u16 dest_cport_id, + struct gb_message *message, gfp_t gfp_mask); + void (*message_cancel)(struct gb_message *message); +}; + +/** + * struct cpc_host_device - CPC host device. + * @gb_hd: pointer to Greybus Host Device this device belongs to. + * @driver: driver operations. + */ +struct cpc_host_device { + struct gb_host_device *gb_hd; + const struct cpc_hd_driver *driver; +}; + +struct cpc_host_device *cpc_hd_create(struct cpc_hd_driver *driver, struct device *parent); +int cpc_hd_add(struct cpc_host_device *cpc_hd); +void cpc_hd_put(struct cpc_host_device *cpc_hd); +void cpc_hd_del(struct cpc_host_device *cpc_hd); +void cpc_hd_rcvd(struct cpc_host_device *cpc_hd, u16 cport_id, u8 *data, size_t length); + +#endif
A number of CPC features, like retransmission or the remote's receive window, are cport-based. In order to prepare for these features, introduce a CPC cport context structure.
The CPC host device module now implements the cport_allocate and cport_release callbacks in order to allocate and release these structures when requested by the Greybus module.
Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
---
 drivers/greybus/cpc/Makefile |   2 +-
 drivers/greybus/cpc/cpc.h    |  29 ++++++++++
 drivers/greybus/cpc/cport.c  |  37 ++++++++++++
 drivers/greybus/cpc/host.c   | 109 ++++++++++++++++++++++++++++++++++-
 drivers/greybus/cpc/host.h   |  12 ++++
 5 files changed, 187 insertions(+), 2 deletions(-)
 create mode 100644 drivers/greybus/cpc/cpc.h
 create mode 100644 drivers/greybus/cpc/cport.c
diff --git a/drivers/greybus/cpc/Makefile b/drivers/greybus/cpc/Makefile index 490982a0ff5..3d50f8c5473 100644 --- a/drivers/greybus/cpc/Makefile +++ b/drivers/greybus/cpc/Makefile @@ -1,6 +1,6 @@ # SPDX-License-Identifier: GPL-2.0
-gb-cpc-y := host.o +gb-cpc-y := cport.o host.o
# CPC core obj-$(CONFIG_GREYBUS_CPC) += gb-cpc.o diff --git a/drivers/greybus/cpc/cpc.h b/drivers/greybus/cpc/cpc.h new file mode 100644 index 00000000000..3915a7fbc4f --- /dev/null +++ b/drivers/greybus/cpc/cpc.h @@ -0,0 +1,29 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) 2025, Silicon Laboratories, Inc. + */ + +#ifndef __CPC_H +#define __CPC_H + +#include <linux/device.h> +#include <linux/greybus.h> +#include <linux/types.h> + +/** + * struct cpc_cport - CPC cport + * @id: cport ID + * @cpc_hd: pointer to the CPC host device this cport belongs to + */ +struct cpc_cport { + u16 id; + + struct cpc_host_device *cpc_hd; +}; + +struct cpc_cport *cpc_cport_alloc(u16 cport_id, gfp_t gfp_mask); +void cpc_cport_release(struct cpc_cport *cport); + +int cpc_cport_message_send(struct cpc_cport *cport, struct gb_message *message, gfp_t gfp_mask); + +#endif diff --git a/drivers/greybus/cpc/cport.c b/drivers/greybus/cpc/cport.c new file mode 100644 index 00000000000..88bdb2f8182 --- /dev/null +++ b/drivers/greybus/cpc/cport.c @@ -0,0 +1,37 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025, Silicon Laboratories, Inc. + */ + +#include "cpc.h" +#include "host.h" + +/** + * cpc_cport_alloc() - Allocate and initialize CPC cport. + * @cport_id: cport ID. + * @gfp_mask: GFP mask for allocation. + * + * Return: Pointer to allocated and initialized cpc_cport, or NULL on failure. 
+ */ +struct cpc_cport *cpc_cport_alloc(u16 cport_id, gfp_t gfp_mask) +{ + struct cpc_cport *cport; + + cport = kzalloc(sizeof(*cport), gfp_mask); + if (!cport) + return NULL; + + cport->id = cport_id; + + return cport; +} + +void cpc_cport_release(struct cpc_cport *cport) +{ + kfree(cport); +} + +int cpc_cport_message_send(struct cpc_cport *cport, struct gb_message *message, gfp_t gfp_mask) +{ + return cport->cpc_hd->driver->message_send(cport->cpc_hd, cport->id, message, gfp_mask); +} diff --git a/drivers/greybus/cpc/host.c b/drivers/greybus/cpc/host.c index 80516517ff6..98566ce7755 100644 --- a/drivers/greybus/cpc/host.c +++ b/drivers/greybus/cpc/host.c @@ -7,6 +7,7 @@ #include <linux/greybus.h> #include <linux/module.h>
+#include "cpc.h" #include "host.h"
static struct cpc_host_device *gb_hd_to_cpc_hd(struct gb_host_device *hd) @@ -14,12 +15,95 @@ static struct cpc_host_device *gb_hd_to_cpc_hd(struct gb_host_device *hd) return (struct cpc_host_device *)&hd->hd_priv; }
+static struct cpc_cport *cpc_hd_get_cport(struct cpc_host_device *cpc_hd, u16 cport_id) +{ + struct cpc_cport *cport; + + mutex_lock(&cpc_hd->lock); + for (int i = 0; i < ARRAY_SIZE(cpc_hd->cports); i++) { + cport = cpc_hd->cports[i]; + if (cport && cport->id == cport_id) + goto unlock; + } + + cport = NULL; + +unlock: + mutex_unlock(&cpc_hd->lock); + + return cport; +} + +static int cpc_hd_message_send(struct cpc_host_device *cpc_hd, u16 cport_id, + struct gb_message *message, gfp_t gfp_mask) +{ + struct cpc_cport *cport; + + cport = cpc_hd_get_cport(cpc_hd, cport_id); + if (!cport) { + dev_err(cpc_hd_dev(cpc_hd), "message_send: cport %u not found\n", cport_id); + return -EINVAL; + } + + return cpc_cport_message_send(cport, message, gfp_mask); +} + +static int cpc_hd_cport_allocate(struct cpc_host_device *cpc_hd, int cport_id, unsigned long flags) +{ + struct cpc_cport *cport; + int ret; + + mutex_lock(&cpc_hd->lock); + for (int i = 0; i < ARRAY_SIZE(cpc_hd->cports); i++) { + if (cpc_hd->cports[i] != NULL) + continue; + + if (cport_id < 0) + cport_id = i; + + cport = cpc_cport_alloc(cport_id, GFP_KERNEL); + if (!cport) { + ret = -ENOMEM; + goto unlock; + } + + cport->cpc_hd = cpc_hd; + + cpc_hd->cports[i] = cport; + ret = cport_id; + goto unlock; + } + + ret = -ENOSPC; +unlock: + mutex_unlock(&cpc_hd->lock); + + return ret; +} + +static void cpc_hd_cport_release(struct cpc_host_device *cpc_hd, u16 cport_id) +{ + struct cpc_cport *cport; + + mutex_lock(&cpc_hd->lock); + for (int i = 0; i < ARRAY_SIZE(cpc_hd->cports); i++) { + cport = cpc_hd->cports[i]; + + if (cport && cport->id == cport_id) { + cpc_cport_release(cport); + cpc_hd->cports[i] = NULL; + break; + } + } + mutex_unlock(&cpc_hd->lock); +} + static int cpc_gb_message_send(struct gb_host_device *gb_hd, u16 cport_id, struct gb_message *message, gfp_t gfp_mask) { struct cpc_host_device *cpc_hd = gb_hd_to_cpc_hd(gb_hd);
- return cpc_hd->driver->message_send(cpc_hd, cport_id, message, gfp_mask); + return cpc_hd_message_send(cpc_hd, cport_id, message, gfp_mask); }
static void cpc_gb_message_cancel(struct gb_message *message) @@ -27,12 +111,33 @@ static void cpc_gb_message_cancel(struct gb_message *message) /* Not implemented */ }
+static int cpc_gb_cport_allocate(struct gb_host_device *gb_hd, int cport_id, unsigned long flags) +{ + struct cpc_host_device *cpc_hd = gb_hd_to_cpc_hd(gb_hd); + + return cpc_hd_cport_allocate(cpc_hd, cport_id, flags); +} + +static void cpc_gb_cport_release(struct gb_host_device *gb_hd, u16 cport_id) +{ + struct cpc_host_device *cpc_hd = gb_hd_to_cpc_hd(gb_hd); + + return cpc_hd_cport_release(cpc_hd, cport_id); +} + static struct gb_hd_driver cpc_gb_driver = { .hd_priv_size = sizeof(struct cpc_host_device), .message_send = cpc_gb_message_send, .message_cancel = cpc_gb_message_cancel, + .cport_allocate = cpc_gb_cport_allocate, + .cport_release = cpc_gb_cport_release, };
+static void cpc_hd_init(struct cpc_host_device *cpc_hd) +{ + mutex_init(&cpc_hd->lock); +} + struct cpc_host_device *cpc_hd_create(struct cpc_hd_driver *driver, struct device *parent) { struct cpc_host_device *cpc_hd; @@ -51,6 +156,8 @@ struct cpc_host_device *cpc_hd_create(struct cpc_hd_driver *driver, struct devic cpc_hd->gb_hd = hd; cpc_hd->driver = driver;
+ cpc_hd_init(cpc_hd); + return cpc_hd; } EXPORT_SYMBOL_GPL(cpc_hd_create); diff --git a/drivers/greybus/cpc/host.h b/drivers/greybus/cpc/host.h index f55feb303f4..c3f2f56a939 100644 --- a/drivers/greybus/cpc/host.h +++ b/drivers/greybus/cpc/host.h @@ -8,11 +8,13 @@
#include <linux/device.h> #include <linux/greybus.h> +#include <linux/mutex.h> #include <linux/types.h>
#define GB_CPC_MSG_SIZE_MAX 4096 #define GB_CPC_NUM_CPORTS 8
+struct cpc_cport; struct cpc_host_device;
struct cpc_hd_driver { @@ -25,12 +27,22 @@ struct cpc_hd_driver { * struct cpc_host_device - CPC host device. * @gb_hd: pointer to Greybus Host Device this device belongs to. * @driver: driver operations. + * @lock: mutex to synchronize access to cport array. + * @cports: array of cport pointers allocated by Greybus core. */ struct cpc_host_device { struct gb_host_device *gb_hd; const struct cpc_hd_driver *driver; + + struct mutex lock; /* Synchronize access to cports */ + struct cpc_cport *cports[GB_CPC_NUM_CPORTS]; };
+static inline struct device *cpc_hd_dev(struct cpc_host_device *cpc_hd) +{ + return &cpc_hd->gb_hd->dev; +} + struct cpc_host_device *cpc_hd_create(struct cpc_hd_driver *driver, struct device *parent); int cpc_hd_add(struct cpc_host_device *cpc_hd); void cpc_hd_put(struct cpc_host_device *cpc_hd);
CPC comes with its own header, which is not yet implemented. Without skbs, the CPC host device drivers would have to deal with two pointers to get a full packet: one to the CPC header and one to the Greybus message. In order to simplify their implementations, convert the Greybus message into an skb.
Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
---
 drivers/greybus/cpc/cpc.h   | 11 +++++++++-
 drivers/greybus/cpc/cport.c | 11 ++++++++--
 drivers/greybus/cpc/host.c  | 41 ++++++++++++++++++++++++++++++++++---
 drivers/greybus/cpc/host.h  |  7 ++++---
 4 files changed, 61 insertions(+), 9 deletions(-)
diff --git a/drivers/greybus/cpc/cpc.h b/drivers/greybus/cpc/cpc.h index 3915a7fbc4f..d9f8f60913a 100644 --- a/drivers/greybus/cpc/cpc.h +++ b/drivers/greybus/cpc/cpc.h @@ -24,6 +24,15 @@ struct cpc_cport { struct cpc_cport *cpc_cport_alloc(u16 cport_id, gfp_t gfp_mask); void cpc_cport_release(struct cpc_cport *cport);
-int cpc_cport_message_send(struct cpc_cport *cport, struct gb_message *message, gfp_t gfp_mask); +int cpc_cport_transmit(struct cpc_cport *cport, struct sk_buff *skb); + +struct cpc_skb_cb { + struct cpc_cport *cport; + + /* Keep track of the GB message the skb originates from */ + struct gb_message *gb_message; +}; + +#define CPC_SKB_CB(__skb) ((struct cpc_skb_cb *)&((__skb)->cb[0]))
#endif diff --git a/drivers/greybus/cpc/cport.c b/drivers/greybus/cpc/cport.c index 88bdb2f8182..ed0b8e8b0d7 100644 --- a/drivers/greybus/cpc/cport.c +++ b/drivers/greybus/cpc/cport.c @@ -31,7 +31,14 @@ void cpc_cport_release(struct cpc_cport *cport) kfree(cport); }
-int cpc_cport_message_send(struct cpc_cport *cport, struct gb_message *message, gfp_t gfp_mask) +/** + * cpc_cport_transmit() - Transmit skb over cport. + * @cport: cport. + * @skb: skb to be transmitted. + */ +int cpc_cport_transmit(struct cpc_cport *cport, struct sk_buff *skb) { - return cport->cpc_hd->driver->message_send(cport->cpc_hd, cport->id, message, gfp_mask); + struct cpc_host_device *cpc_hd = cport->cpc_hd; + + return cpc_hd_send_skb(cpc_hd, skb); } diff --git a/drivers/greybus/cpc/host.c b/drivers/greybus/cpc/host.c index 98566ce7755..ee090dd3097 100644 --- a/drivers/greybus/cpc/host.c +++ b/drivers/greybus/cpc/host.c @@ -6,6 +6,7 @@ #include <linux/err.h> #include <linux/greybus.h> #include <linux/module.h> +#include <linux/skbuff.h>
#include "cpc.h" #include "host.h" @@ -38,6 +39,8 @@ static int cpc_hd_message_send(struct cpc_host_device *cpc_hd, u16 cport_id, struct gb_message *message, gfp_t gfp_mask) { struct cpc_cport *cport; + struct sk_buff *skb; + unsigned int size;
cport = cpc_hd_get_cport(cpc_hd, cport_id); if (!cport) { @@ -45,7 +48,18 @@ static int cpc_hd_message_send(struct cpc_host_device *cpc_hd, u16 cport_id, return -EINVAL; }
- return cpc_cport_message_send(cport, message, gfp_mask); + size = sizeof(*message->header) + message->payload_size; + skb = alloc_skb(size, gfp_mask); + if (!skb) + return -ENOMEM; + + /* Header and payload are already contiguous in Greybus message */ + skb_put_data(skb, message->buffer, sizeof(*message->header) + message->payload_size); + + CPC_SKB_CB(skb)->cport = cport; + CPC_SKB_CB(skb)->gb_message = message; + + return cpc_cport_transmit(cport, skb); }
static int cpc_hd_cport_allocate(struct cpc_host_device *cpc_hd, int cport_id, unsigned long flags) @@ -143,8 +157,8 @@ struct cpc_host_device *cpc_hd_create(struct cpc_hd_driver *driver, struct devic struct cpc_host_device *cpc_hd; struct gb_host_device *hd;
- if ((!driver->message_send) || (!driver->message_cancel)) { - dev_err(parent, "missing mandatory callbacks\n"); + if (!driver->transmit) { + dev_err(parent, "missing mandatory callback\n"); return ERR_PTR(-EINVAL); }
@@ -180,12 +194,33 @@ void cpc_hd_del(struct cpc_host_device *cpc_hd) } EXPORT_SYMBOL_GPL(cpc_hd_del);
+void cpc_hd_message_sent(struct sk_buff *skb, int status) +{ + struct cpc_host_device *cpc_hd = CPC_SKB_CB(skb)->cport->cpc_hd; + struct gb_host_device *hd = cpc_hd->gb_hd; + + greybus_message_sent(hd, CPC_SKB_CB(skb)->gb_message, status); +} +EXPORT_SYMBOL_GPL(cpc_hd_message_sent); + void cpc_hd_rcvd(struct cpc_host_device *cpc_hd, u16 cport_id, u8 *data, size_t length) { greybus_data_rcvd(cpc_hd->gb_hd, cport_id, data, length); } EXPORT_SYMBOL_GPL(cpc_hd_rcvd);
+/** + * cpc_hd_send_skb() - Queue a socket buffer for transmission. + * @cpc_hd: Host device to send SKB over. + * @skb: SKB to send. + */ +int cpc_hd_send_skb(struct cpc_host_device *cpc_hd, struct sk_buff *skb) +{ + const struct cpc_hd_driver *drv = cpc_hd->driver; + + return drv->transmit(cpc_hd, skb); +} + MODULE_DESCRIPTION("Greybus over CPC"); MODULE_LICENSE("GPL"); MODULE_AUTHOR("Silicon Laboratories, Inc."); diff --git a/drivers/greybus/cpc/host.h b/drivers/greybus/cpc/host.h index c3f2f56a939..191b5e394a6 100644 --- a/drivers/greybus/cpc/host.h +++ b/drivers/greybus/cpc/host.h @@ -18,9 +18,7 @@ struct cpc_cport; struct cpc_host_device;
struct cpc_hd_driver { - int (*message_send)(struct cpc_host_device *hd, u16 dest_cport_id, - struct gb_message *message, gfp_t gfp_mask); - void (*message_cancel)(struct gb_message *message); + int (*transmit)(struct cpc_host_device *hd, struct sk_buff *skb); };
/** @@ -48,5 +46,8 @@ int cpc_hd_add(struct cpc_host_device *cpc_hd); void cpc_hd_put(struct cpc_host_device *cpc_hd); void cpc_hd_del(struct cpc_host_device *cpc_hd); void cpc_hd_rcvd(struct cpc_host_device *cpc_hd, u16 cport_id, u8 *data, size_t length); +void cpc_hd_message_sent(struct sk_buff *skb, int status); + +int cpc_hd_send_skb(struct cpc_host_device *cpc_hd, struct sk_buff *skb);
#endif
Take advantage of the padding bytes present in the Greybus header to store the CPort ID and minimize overhead. This technique is already used by the es2 driver.
Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
---
 drivers/greybus/cpc/cpc.h   |  3 +++
 drivers/greybus/cpc/cport.c | 29 +++++++++++++++++++++++++++++
 drivers/greybus/cpc/host.c  | 13 ++++++++++++-
 drivers/greybus/cpc/host.h  |  2 +-
 4 files changed, 45 insertions(+), 2 deletions(-)
diff --git a/drivers/greybus/cpc/cpc.h b/drivers/greybus/cpc/cpc.h index d9f8f60913a..62597957814 100644 --- a/drivers/greybus/cpc/cpc.h +++ b/drivers/greybus/cpc/cpc.h @@ -24,6 +24,9 @@ struct cpc_cport { struct cpc_cport *cpc_cport_alloc(u16 cport_id, gfp_t gfp_mask); void cpc_cport_release(struct cpc_cport *cport);
+void cpc_cport_pack(struct gb_operation_msg_hdr *gb_hdr, u16 cport_id); +u16 cpc_cport_unpack(struct gb_operation_msg_hdr *gb_hdr); + int cpc_cport_transmit(struct cpc_cport *cport, struct sk_buff *skb);
struct cpc_skb_cb { diff --git a/drivers/greybus/cpc/cport.c b/drivers/greybus/cpc/cport.c index ed0b8e8b0d7..0fc4ff0c5bb 100644 --- a/drivers/greybus/cpc/cport.c +++ b/drivers/greybus/cpc/cport.c @@ -3,6 +3,9 @@ * Copyright (c) 2025, Silicon Laboratories, Inc. */
+#include <linux/unaligned.h> +#include <linux/skbuff.h> + #include "cpc.h" #include "host.h"
@@ -31,6 +34,27 @@ void cpc_cport_release(struct cpc_cport *cport) kfree(cport); }
+/** + * cpc_cport_pack() - Pack CPort ID into Greybus Operation Message header. + * @gb_hdr: Greybus operation message header. + * @cport_id: CPort ID to pack. + */ +void cpc_cport_pack(struct gb_operation_msg_hdr *gb_hdr, u16 cport_id) +{ + put_unaligned_le16(cport_id, gb_hdr->pad); +} + +/** + * cpc_cport_unpack() - Unpack CPort ID from Greybus Operation Message header. + * @gb_hdr: Greybus operation message header. + * + * Return: CPort ID packed in the header. + */ +u16 cpc_cport_unpack(struct gb_operation_msg_hdr *gb_hdr) +{ + return get_unaligned_le16(gb_hdr->pad); +} + /** * cpc_cport_transmit() - Transmit skb over cport. * @cport: cport. @@ -39,6 +63,11 @@ void cpc_cport_release(struct cpc_cport *cport) int cpc_cport_transmit(struct cpc_cport *cport, struct sk_buff *skb) { struct cpc_host_device *cpc_hd = cport->cpc_hd; + struct gb_operation_msg_hdr *gb_hdr; + + /* Inject cport ID in Greybus header */ + gb_hdr = (struct gb_operation_msg_hdr *)skb->data; + cpc_cport_pack(gb_hdr, cport->id);
return cpc_hd_send_skb(cpc_hd, skb); } diff --git a/drivers/greybus/cpc/host.c b/drivers/greybus/cpc/host.c index ee090dd3097..b096b639182 100644 --- a/drivers/greybus/cpc/host.c +++ b/drivers/greybus/cpc/host.c @@ -203,8 +203,19 @@ void cpc_hd_message_sent(struct sk_buff *skb, int status) } EXPORT_SYMBOL_GPL(cpc_hd_message_sent);
-void cpc_hd_rcvd(struct cpc_host_device *cpc_hd, u16 cport_id, u8 *data, size_t length) +void cpc_hd_rcvd(struct cpc_host_device *cpc_hd, u8 *data, size_t length) { + struct gb_operation_msg_hdr *gb_hdr; + u16 cport_id; + + /* Prevent an out-of-bound access if called with non-sensical parameters. */ + if (!data || length < sizeof(*gb_hdr)) + return; + + /* Retrieve cport ID that was packed in Greybus header */ + gb_hdr = (struct gb_operation_msg_hdr *)data; + cport_id = cpc_cport_unpack(gb_hdr); + greybus_data_rcvd(cpc_hd->gb_hd, cport_id, data, length); } EXPORT_SYMBOL_GPL(cpc_hd_rcvd); diff --git a/drivers/greybus/cpc/host.h b/drivers/greybus/cpc/host.h index 191b5e394a6..2e568bac44e 100644 --- a/drivers/greybus/cpc/host.h +++ b/drivers/greybus/cpc/host.h @@ -45,7 +45,7 @@ struct cpc_host_device *cpc_hd_create(struct cpc_hd_driver *driver, struct devic int cpc_hd_add(struct cpc_host_device *cpc_hd); void cpc_hd_put(struct cpc_host_device *cpc_hd); void cpc_hd_del(struct cpc_host_device *cpc_hd); -void cpc_hd_rcvd(struct cpc_host_device *cpc_hd, u16 cport_id, u8 *data, size_t length); +void cpc_hd_rcvd(struct cpc_host_device *cpc_hd, u8 *data, size_t length); void cpc_hd_message_sent(struct sk_buff *skb, int status);
int cpc_hd_send_skb(struct cpc_host_device *cpc_hd, struct sk_buff *skb);
On Fri Dec 12, 2025 at 11:12 AM EST, Damien Riégel wrote:
> +/**
> + * cpc_cport_unpack() - Unpack CPort ID from Greybus Operation Message header.
> + * @gb_hdr: Greybus operation message header.
> + *
> + * Return: CPort ID packed in the header.
> + */
> +u16 cpc_cport_unpack(struct gb_operation_msg_hdr *gb_hdr)
> +{
> +	return get_unaligned_le16(gb_hdr->pad);
> +}
We probably want to clear the packing from the `gb_hdr`, just like it is done in the es2 driver.
Thanks,
For symmetry, also convert the RX path to use socket buffers instead of u8 buffers. The main difference is that CPC host device drivers were previously responsible for allocating and freeing the buffers. Now they are only responsible for allocating the skb and passing it to the upper layer; the CPC "core" module takes care of releasing it when it is done with it.
Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
---
 drivers/greybus/cpc/host.c | 13 ++++++++-----
 drivers/greybus/cpc/host.h |  2 +-
 2 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/drivers/greybus/cpc/host.c b/drivers/greybus/cpc/host.c index b096b639182..7ffa3bf4021 100644 --- a/drivers/greybus/cpc/host.c +++ b/drivers/greybus/cpc/host.c @@ -203,20 +203,23 @@ void cpc_hd_message_sent(struct sk_buff *skb, int status) } EXPORT_SYMBOL_GPL(cpc_hd_message_sent);
-void cpc_hd_rcvd(struct cpc_host_device *cpc_hd, u8 *data, size_t length) +void cpc_hd_rcvd(struct cpc_host_device *cpc_hd, struct sk_buff *skb) { struct gb_operation_msg_hdr *gb_hdr; u16 cport_id;
/* Prevent an out-of-bound access if called with non-sensical parameters. */ - if (!data || length < sizeof(*gb_hdr)) - return; + if (skb->len < sizeof(*gb_hdr)) + goto free_skb;
/* Retrieve cport ID that was packed in Greybus header */ - gb_hdr = (struct gb_operation_msg_hdr *)data; + gb_hdr = (struct gb_operation_msg_hdr *)skb->data; cport_id = cpc_cport_unpack(gb_hdr);
- greybus_data_rcvd(cpc_hd->gb_hd, cport_id, data, length); + greybus_data_rcvd(cpc_hd->gb_hd, cport_id, skb->data, skb->len); + +free_skb: + kfree_skb(skb); } EXPORT_SYMBOL_GPL(cpc_hd_rcvd);
diff --git a/drivers/greybus/cpc/host.h b/drivers/greybus/cpc/host.h index 2e568bac44e..cc835f5298b 100644 --- a/drivers/greybus/cpc/host.h +++ b/drivers/greybus/cpc/host.h @@ -45,7 +45,7 @@ struct cpc_host_device *cpc_hd_create(struct cpc_hd_driver *driver, struct devic int cpc_hd_add(struct cpc_host_device *cpc_hd); void cpc_hd_put(struct cpc_host_device *cpc_hd); void cpc_hd_del(struct cpc_host_device *cpc_hd); -void cpc_hd_rcvd(struct cpc_host_device *cpc_hd, u8 *data, size_t length); +void cpc_hd_rcvd(struct cpc_host_device *cpc_hd, struct sk_buff *skb); void cpc_hd_message_sent(struct sk_buff *skb, int status);
int cpc_hd_send_skb(struct cpc_host_device *cpc_hd, struct sk_buff *skb);
CPC's main features are reliable transmission and management of the remote's receive window. Implementing these features requires an additional header, which is prepended to all Greybus messages.
Reliable transmission: to make transmission reliable, messages are sequenced and acknowledged. That accounts for two bytes of the header: one for the sequence number and one for the acknowledgment number. If a message is not acked in a timely manner, a retransmission mechanism will attempt another transmission. That mechanism will be implemented in a future patch set.
Remote's receive window: the remote advertises the number of reception buffers that are available on a given cport. The other peer must take care not to send more messages than advertised by the remote. This is a form of flow control. That accounts for one byte in the header.
The remaining byte carries flags. For instance, there is a flag to indicate whether the frame is a CPC message or a Greybus message.
Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
---
 drivers/greybus/cpc/header.h | 44 ++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)
 create mode 100644 drivers/greybus/cpc/header.h
diff --git a/drivers/greybus/cpc/header.h b/drivers/greybus/cpc/header.h
new file mode 100644
index 00000000000..afccc941726
--- /dev/null
+++ b/drivers/greybus/cpc/header.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2025, Silicon Laboratories, Inc.
+ */
+
+#ifndef __CPC_HEADER_H
+#define __CPC_HEADER_H
+
+#include <linux/greybus.h>
+#include <linux/types.h>
+
+#define CPC_HEADER_MAX_RX_WINDOW	U8_MAX
+
+/**
+ * struct cpc header - Representation of CPC header.
+ * @ctrl_flags: contains the type of frame and other control flags.
+ * @recv_wnd: number of buffers that the cport can receive without blocking.
+ * @seq: sequence number.
+ * @ack: acknowledge number, indicate to the remote the next sequence number
+ *       this peer expects to see.
+ *
+ * Each peer can confirm reception of frames by setting the acknowledgment number to the next frame
+ * it expects to see, i.e. setting the ack number to X effectively acknowledges frames with sequence
+ * number up to X-1.
+ *
+ * CPC is designed around the concept that each cport has its pool of reception buffers. The number
+ * of buffers in a pool is advertised to the remote via the @recv_wnd attribute. This acts as
+ * software flow-control, and a peer shall not send frames to a remote if the @recv_wnd is zero.
+ *
+ * The hight-bit (0x80) of the control byte indicates if the frame targets CPC or Greybus. If the
+ * bit is set, the frame should be interpreted as CPC control frames. For simplicity, control frames
+ * have the same encoding as Greybus frames.
+ */
+struct cpc_header {
+	__u8 ctrl_flags;
+	__u8 recv_wnd;
+	__u8 seq;
+	__u8 ack;
+} __packed;
+
+#define CPC_HEADER_SIZE		(sizeof(struct cpc_header))
+#define GREYBUS_HEADER_SIZE	(sizeof(struct gb_operation_msg_hdr))
+
+#endif
On Fri Dec 12, 2025 at 11:13 AM EST, Damien Riégel wrote:
- The hight-bit (0x80) of the control byte indicates if the frame targets CPC or Greybus. If the
- bit is set, the frame should be interpreted as CPC control frames. For simplicity, control frames
- have the same encoding as Greybus frames.
"The **eighth** bit", "the frame should be interpreted as **a CPC control frame**."
Thanks,
Account for the CPC header that is prepended to every frame in the RX and TX paths. For now, nothing is done with that headroom, but at least the bytes are reserved for it.
Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
---
 drivers/greybus/cpc/host.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/greybus/cpc/host.c b/drivers/greybus/cpc/host.c index 7ffa3bf4021..c89617623e8 100644 --- a/drivers/greybus/cpc/host.c +++ b/drivers/greybus/cpc/host.c @@ -9,6 +9,7 @@ #include <linux/skbuff.h>
 #include "cpc.h"
+#include "header.h"
 #include "host.h"
static struct cpc_host_device *gb_hd_to_cpc_hd(struct gb_host_device *hd) @@ -48,11 +49,13 @@ static int cpc_hd_message_send(struct cpc_host_device *cpc_hd, u16 cport_id, return -EINVAL; }
-	size = sizeof(*message->header) + message->payload_size;
+	size = sizeof(*message->header) + message->payload_size + CPC_HEADER_SIZE;
 	skb = alloc_skb(size, gfp_mask);
 	if (!skb)
 		return -ENOMEM;
+	skb_reserve(skb, CPC_HEADER_SIZE);
+
 	/* Header and payload are already contiguous in Greybus message */
 	skb_put_data(skb, message->buffer, sizeof(*message->header) + message->payload_size);
@@ -209,9 +212,11 @@ void cpc_hd_rcvd(struct cpc_host_device *cpc_hd, struct sk_buff *skb) u16 cport_id;
 	/* Prevent an out-of-bound access if called with non-sensical parameters. */
-	if (skb->len < sizeof(*gb_hdr))
+	if (skb->len < (sizeof(*gb_hdr) + CPC_HEADER_SIZE))
 		goto free_skb;
+	skb_pull(skb, CPC_HEADER_SIZE);
+
 	/* Retrieve cport ID that was packed in Greybus header */
 	gb_hdr = (struct gb_operation_msg_hdr *)skb->data;
 	cport_id = cpc_cport_unpack(gb_hdr);
The first step in making the CPC header actually do something is to add the sequence number to outgoing messages and validate that incoming frames are received in order.
At this stage, the driver doesn't send standalone acks, so if a message with sequence number X is received, the remote will not be acknowledged until a message targeting that cport comes down from the Greybus layer. Only then will the driver ack with an acknowledgment number of X + 1.
Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
---
 drivers/greybus/cpc/Makefile   |  2 +-
 drivers/greybus/cpc/cpc.h      | 20 +++++++++++++++
 drivers/greybus/cpc/cport.c    | 25 ++++++++++++++++++
 drivers/greybus/cpc/header.c   | 17 ++++++++++++
 drivers/greybus/cpc/header.h   |  2 ++
 drivers/greybus/cpc/host.c     | 13 +++++++---
 drivers/greybus/cpc/protocol.c | 47 ++++++++++++++++++++++++++++++++++
 7 files changed, 121 insertions(+), 5 deletions(-)
 create mode 100644 drivers/greybus/cpc/header.c
 create mode 100644 drivers/greybus/cpc/protocol.c
diff --git a/drivers/greybus/cpc/Makefile b/drivers/greybus/cpc/Makefile index 3d50f8c5473..c4b530d27a3 100644 --- a/drivers/greybus/cpc/Makefile +++ b/drivers/greybus/cpc/Makefile @@ -1,6 +1,6 @@ # SPDX-License-Identifier: GPL-2.0
-gb-cpc-y := cport.o host.o
+gb-cpc-y := cport.o header.o host.o protocol.o
# CPC core obj-$(CONFIG_GREYBUS_CPC) += gb-cpc.o diff --git a/drivers/greybus/cpc/cpc.h b/drivers/greybus/cpc/cpc.h index 62597957814..87b54a4fd34 100644 --- a/drivers/greybus/cpc/cpc.h +++ b/drivers/greybus/cpc/cpc.h @@ -8,17 +8,32 @@
#include <linux/device.h> #include <linux/greybus.h> +#include <linux/mutex.h> #include <linux/types.h>
+struct sk_buff; + /** * struct cpc_cport - CPC cport * @id: cport ID * @cpc_hd: pointer to the CPC host device this cport belongs to + * @lock: mutex to synchronize accesses to tcb and other attributes + * @tcb: Transmission Control Block */ struct cpc_cport { u16 id;
struct cpc_host_device *cpc_hd; + struct mutex lock; /* Synchronize access to state variables */ + + /* + * @ack: current acknowledge number + * @seq: current sequence number + */ + struct { + u8 ack; + u8 seq; + } tcb; };
struct cpc_cport *cpc_cport_alloc(u16 cport_id, gfp_t gfp_mask); @@ -34,8 +49,13 @@ struct cpc_skb_cb {
/* Keep track of the GB message the skb originates from */ struct gb_message *gb_message; + + u8 seq; };
#define CPC_SKB_CB(__skb) ((struct cpc_skb_cb *)&((__skb)->cb[0]))
+void cpc_protocol_prepare_header(struct sk_buff *skb, u8 ack); +void cpc_protocol_on_data(struct cpc_cport *cport, struct sk_buff *skb); + #endif diff --git a/drivers/greybus/cpc/cport.c b/drivers/greybus/cpc/cport.c index 0fc4ff0c5bb..2ee0b129996 100644 --- a/drivers/greybus/cpc/cport.c +++ b/drivers/greybus/cpc/cport.c @@ -9,6 +9,16 @@ #include "cpc.h" #include "host.h"
+/** + * cpc_cport_tcb_reset() - Reset cport's TCB to initial values. + * @cport: cport pointer + */ +static void cpc_cport_tcb_reset(struct cpc_cport *cport) +{ + cport->tcb.ack = 0; + cport->tcb.seq = 0; +} + /** * cpc_cport_alloc() - Allocate and initialize CPC cport. * @cport_id: cport ID. @@ -25,6 +35,9 @@ struct cpc_cport *cpc_cport_alloc(u16 cport_id, gfp_t gfp_mask) return NULL;
cport->id = cport_id; + cpc_cport_tcb_reset(cport); + + mutex_init(&cport->lock);
return cport; } @@ -64,10 +77,22 @@ int cpc_cport_transmit(struct cpc_cport *cport, struct sk_buff *skb) { struct cpc_host_device *cpc_hd = cport->cpc_hd; struct gb_operation_msg_hdr *gb_hdr; + u8 ack;
/* Inject cport ID in Greybus header */ gb_hdr = (struct gb_operation_msg_hdr *)skb->data; cpc_cport_pack(gb_hdr, cport->id);
+ mutex_lock(&cport->lock); + + CPC_SKB_CB(skb)->seq = cport->tcb.seq; + + cport->tcb.seq++; + ack = cport->tcb.ack; + + mutex_unlock(&cport->lock); + + cpc_protocol_prepare_header(skb, ack); + return cpc_hd_send_skb(cpc_hd, skb); } diff --git a/drivers/greybus/cpc/header.c b/drivers/greybus/cpc/header.c new file mode 100644 index 00000000000..62946d6077e --- /dev/null +++ b/drivers/greybus/cpc/header.c @@ -0,0 +1,17 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025, Silicon Laboratories, Inc. + */ + +#include "header.h" + +/** + * cpc_header_get_seq() - Get the sequence number. + * @hdr: CPC header. + * + * Return: Sequence number. + */ +u8 cpc_header_get_seq(const struct cpc_header *hdr) +{ + return hdr->seq; +} diff --git a/drivers/greybus/cpc/header.h b/drivers/greybus/cpc/header.h index afccc941726..2a64aa8d278 100644 --- a/drivers/greybus/cpc/header.h +++ b/drivers/greybus/cpc/header.h @@ -41,4 +41,6 @@ struct cpc_header { #define CPC_HEADER_SIZE (sizeof(struct cpc_header)) #define GREYBUS_HEADER_SIZE (sizeof(struct gb_operation_msg_hdr))
+u8 cpc_header_get_seq(const struct cpc_header *hdr); + #endif diff --git a/drivers/greybus/cpc/host.c b/drivers/greybus/cpc/host.c index c89617623e8..7f0579fde26 100644 --- a/drivers/greybus/cpc/host.c +++ b/drivers/greybus/cpc/host.c @@ -209,19 +209,24 @@ EXPORT_SYMBOL_GPL(cpc_hd_message_sent); void cpc_hd_rcvd(struct cpc_host_device *cpc_hd, struct sk_buff *skb) { struct gb_operation_msg_hdr *gb_hdr; + struct cpc_cport *cport; u16 cport_id;
/* Prevent an out-of-bound access if called with non-sensical parameters. */ if (skb->len < (sizeof(*gb_hdr) + CPC_HEADER_SIZE)) goto free_skb;
- skb_pull(skb, CPC_HEADER_SIZE); - /* Retrieve cport ID that was packed in Greybus header */ - gb_hdr = (struct gb_operation_msg_hdr *)skb->data; + gb_hdr = (struct gb_operation_msg_hdr *)(skb->data + CPC_HEADER_SIZE); cport_id = cpc_cport_unpack(gb_hdr);
- greybus_data_rcvd(cpc_hd->gb_hd, cport_id, skb->data, skb->len); + cport = cpc_hd_get_cport(cpc_hd, cport_id); + if (!cport) { + dev_warn(cpc_hd_dev(cpc_hd), "cport %u not allocated\n", cport_id); + goto free_skb; + } + + cpc_protocol_on_data(cport, skb);
free_skb: kfree_skb(skb); diff --git a/drivers/greybus/cpc/protocol.c b/drivers/greybus/cpc/protocol.c new file mode 100644 index 00000000000..037910e899f --- /dev/null +++ b/drivers/greybus/cpc/protocol.c @@ -0,0 +1,47 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025, Silicon Laboratories, Inc. + */ + +#include <linux/skbuff.h> + +#include "cpc.h" +#include "header.h" +#include "host.h" + +void cpc_protocol_prepare_header(struct sk_buff *skb, u8 ack) +{ + struct cpc_header *hdr; + + skb_push(skb, sizeof(*hdr)); + + hdr = (struct cpc_header *)skb->data; + hdr->ack = ack; + hdr->recv_wnd = 0; + hdr->ctrl_flags = 0; + hdr->seq = CPC_SKB_CB(skb)->seq; +} + +void cpc_protocol_on_data(struct cpc_cport *cport, struct sk_buff *skb) +{ + struct cpc_header *cpc_hdr = (struct cpc_header *)skb->data; + u8 seq = cpc_header_get_seq(cpc_hdr); + bool expected_seq = false; + + mutex_lock(&cport->lock); + + expected_seq = seq == cport->tcb.ack; + if (expected_seq) + cport->tcb.ack++; + else + dev_warn(cpc_hd_dev(cport->cpc_hd), "unexpected seq: %u, expected seq: %u\n", seq, + cport->tcb.ack); + + mutex_unlock(&cport->lock); + + if (expected_seq) { + skb_pull(skb, CPC_HEADER_SIZE); + + greybus_data_rcvd(cport->cpc_hd->gb_hd, cport->id, skb->data, skb->len); + } +}
On Fri Dec 12, 2025 at 11:13 AM EST, Damien Riégel wrote:
+void cpc_protocol_on_data(struct cpc_cport *cport, struct sk_buff *skb) +{
- [...]
- expected_seq = seq == cport->tcb.ack;
- if (expected_seq)
- cport->tcb.ack++;
- else
- dev_warn(cpc_hd_dev(cport->cpc_hd), "unexpected seq: %u, expected seq: %u\n", seq,
- cport->tcb.ack);
- [...]
+}
This warning can occur somewhat often due to retransmissions. Perhaps we should change it to `dev_info`.
Thanks,
Currently, CPC doesn't send messages on its own; it only prepends its header to outgoing messages. This can lead to messages never being acknowledged, for instance in the case of an SVC ping:
    Host                                    Device
      |  SVC Ping (seq=X, ack=Y)              |
      |-------------------------------------->|
      |  SVC Ping Reply (seq=Y, ack=X+1)      |
      |<--------------------------------------|
The "Ping Reply" is never acknowledged at the CPC level, which can lead to retransmissions, or worse, the device might think the link is broken and take action to recover.
To prevent that scenario, an ack mechanism is implemented in the most straightforward manner: send an ACK for every incoming message. Two flags need to be added:
- First, an ACK frame should not be passed to the Greybus layer, so a "CONTROL" flag is added. If this flag is set, the frame is a control message and should stay at the CPC level. Currently there is only one type of control frame, the standalone ack. Control messages have the same format as Greybus operations.
- Second, acks themselves should not be acked, so to determine whether a message should be acked, a REQUEST_ACK flag is added.
Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
---
 drivers/greybus/cpc/cpc.h      |  3 ++
 drivers/greybus/cpc/cport.c    |  1 +
 drivers/greybus/cpc/header.c   | 41 +++++++++++++++++++++++++
 drivers/greybus/cpc/header.h   |  3 ++
 drivers/greybus/cpc/protocol.c | 55 +++++++++++++++++++++++++++++-----
 5 files changed, 95 insertions(+), 8 deletions(-)
diff --git a/drivers/greybus/cpc/cpc.h b/drivers/greybus/cpc/cpc.h index 87b54a4fd34..725fd7f4afc 100644 --- a/drivers/greybus/cpc/cpc.h +++ b/drivers/greybus/cpc/cpc.h @@ -51,6 +51,9 @@ struct cpc_skb_cb { struct gb_message *gb_message;
u8 seq; + +#define CPC_SKB_FLAG_REQ_ACK (1 << 0) + u8 cpc_flags; };
#define CPC_SKB_CB(__skb) ((struct cpc_skb_cb *)&((__skb)->cb[0])) diff --git a/drivers/greybus/cpc/cport.c b/drivers/greybus/cpc/cport.c index 2ee0b129996..35a148abbed 100644 --- a/drivers/greybus/cpc/cport.c +++ b/drivers/greybus/cpc/cport.c @@ -86,6 +86,7 @@ int cpc_cport_transmit(struct cpc_cport *cport, struct sk_buff *skb) mutex_lock(&cport->lock);
CPC_SKB_CB(skb)->seq = cport->tcb.seq; + CPC_SKB_CB(skb)->cpc_flags = CPC_SKB_FLAG_REQ_ACK;
cport->tcb.seq++; ack = cport->tcb.ack; diff --git a/drivers/greybus/cpc/header.c b/drivers/greybus/cpc/header.c index 62946d6077e..8875a6fed26 100644 --- a/drivers/greybus/cpc/header.c +++ b/drivers/greybus/cpc/header.c @@ -3,8 +3,25 @@ * Copyright (c) 2025, Silicon Laboratories, Inc. */
+#include <linux/bitfield.h> +#include <linux/bits.h> + #include "header.h"
+#define CPC_HEADER_CONTROL_IS_CONTROL_MASK BIT(7) +#define CPC_HEADER_CONTROL_REQ_ACK_MASK BIT(6) + +/** + * cpc_header_is_control() - Identify if this is a control frame. + * @hdr: CPC header. + * + * Return: True if this is a control frame, false if this a Greybus frame. + */ +bool cpc_header_is_control(const struct cpc_header *hdr) +{ + return hdr->ctrl_flags & CPC_HEADER_CONTROL_IS_CONTROL_MASK; +} + /** * cpc_header_get_seq() - Get the sequence number. * @hdr: CPC header. @@ -15,3 +32,27 @@ u8 cpc_header_get_seq(const struct cpc_header *hdr) { return hdr->seq; } + +/** + * cpc_header_get_req_ack() - Get the request acknowledge frame flag. + * @hdr: CPC header. + * + * Return: Request acknowledge frame flag. + */ +bool cpc_header_get_req_ack(const struct cpc_header *hdr) +{ + return FIELD_GET(CPC_HEADER_CONTROL_REQ_ACK_MASK, hdr->ctrl_flags); +} + +/** + * cpc_header_encode_ctrl_flags() - Encode parameters into the control byte. + * @control: True if CPC control frame, false if Greybus frame. + * @req_ack: Frame flag indicating a request to be acknowledged. + * + * Return: Encoded control byte. + */ +u8 cpc_header_encode_ctrl_flags(bool control, bool req_ack) +{ + return FIELD_PREP(CPC_HEADER_CONTROL_IS_CONTROL_MASK, control) | + FIELD_PREP(CPC_HEADER_CONTROL_REQ_ACK_MASK, req_ack); +} diff --git a/drivers/greybus/cpc/header.h b/drivers/greybus/cpc/header.h index 2a64aa8d278..0c9f6e56524 100644 --- a/drivers/greybus/cpc/header.h +++ b/drivers/greybus/cpc/header.h @@ -41,6 +41,9 @@ struct cpc_header { #define CPC_HEADER_SIZE (sizeof(struct cpc_header)) #define GREYBUS_HEADER_SIZE (sizeof(struct gb_operation_msg_hdr))
+bool cpc_header_is_control(const struct cpc_header *hdr); u8 cpc_header_get_seq(const struct cpc_header *hdr); +bool cpc_header_get_req_ack(const struct cpc_header *hdr); +u8 cpc_header_encode_ctrl_flags(bool control, bool req_ack);
#endif diff --git a/drivers/greybus/cpc/protocol.c b/drivers/greybus/cpc/protocol.c index 037910e899f..b4dd4e173a1 100644 --- a/drivers/greybus/cpc/protocol.c +++ b/drivers/greybus/cpc/protocol.c @@ -9,6 +9,11 @@ #include "header.h" #include "host.h"
+static bool cpc_skb_is_sequenced(struct sk_buff *skb) +{ + return CPC_SKB_CB(skb)->cpc_flags & CPC_SKB_FLAG_REQ_ACK; +} + void cpc_protocol_prepare_header(struct sk_buff *skb, u8 ack) { struct cpc_header *hdr; @@ -18,28 +23,62 @@ void cpc_protocol_prepare_header(struct sk_buff *skb, u8 ack) hdr = (struct cpc_header *)skb->data; hdr->ack = ack; hdr->recv_wnd = 0; - hdr->ctrl_flags = 0; hdr->seq = CPC_SKB_CB(skb)->seq; + hdr->ctrl_flags = cpc_header_encode_ctrl_flags(!CPC_SKB_CB(skb)->gb_message, + cpc_skb_is_sequenced(skb)); +} + +static void cpc_protocol_queue_ack(struct cpc_cport *cport, u8 ack) +{ + struct gb_operation_msg_hdr *gb_hdr; + struct sk_buff *skb; + + skb = alloc_skb(CPC_HEADER_SIZE + sizeof(*gb_hdr), GFP_KERNEL); + if (!skb) + return; + + skb_reserve(skb, CPC_HEADER_SIZE); + + gb_hdr = skb_put(skb, sizeof(*gb_hdr)); + memset(gb_hdr, 0, sizeof(*gb_hdr)); + + /* In the CPC Operation Header, only the size and cport_id matter for ACKs. */ + gb_hdr->size = sizeof(*gb_hdr); + cpc_cport_pack(gb_hdr, cport->id); + + cpc_protocol_prepare_header(skb, ack); + + cpc_hd_send_skb(cport->cpc_hd, skb); }
void cpc_protocol_on_data(struct cpc_cport *cport, struct sk_buff *skb) { struct cpc_header *cpc_hdr = (struct cpc_header *)skb->data; + bool require_ack = cpc_header_get_req_ack(cpc_hdr); u8 seq = cpc_header_get_seq(cpc_hdr); bool expected_seq = false; + u8 ack;
mutex_lock(&cport->lock);
- expected_seq = seq == cport->tcb.ack; - if (expected_seq) - cport->tcb.ack++; - else - dev_warn(cpc_hd_dev(cport->cpc_hd), "unexpected seq: %u, expected seq: %u\n", seq, - cport->tcb.ack); + if (require_ack) { + expected_seq = seq == cport->tcb.ack; + if (expected_seq) + cport->tcb.ack++; + else + dev_warn(cpc_hd_dev(cport->cpc_hd), + "unexpected seq: %u, expected seq: %u\n", seq, cport->tcb.ack); + } + + ack = cport->tcb.ack;
mutex_unlock(&cport->lock);
- if (expected_seq) { + /* Ack no matter if the sequence was valid or not, to resync with remote */ + if (require_ack) + cpc_protocol_queue_ack(cport, ack); + + if (expected_seq && !cpc_header_is_control(cpc_hdr)) { skb_pull(skb, CPC_HEADER_SIZE);
greybus_data_rcvd(cport->cpc_hd->gb_hd, cport->id, skb->data, skb->len);
As the first step to handle remote's RX window, store the skb in a sk_buff_head list, instead of sending a message immediately when pushed by Greybus.
skbs are still sent out straight away, but now there is a place to store them if the remote's RX window is too small.
Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
---
 drivers/greybus/cpc/cpc.h      | 10 ++++++----
 drivers/greybus/cpc/cport.c    | 21 ++++++++++++---------
 drivers/greybus/cpc/host.c     |  4 +++-
 drivers/greybus/cpc/protocol.c | 25 ++++++++++++++++++++++++-
 4 files changed, 45 insertions(+), 15 deletions(-)
diff --git a/drivers/greybus/cpc/cpc.h b/drivers/greybus/cpc/cpc.h index 725fd7f4afc..f1669585c45 100644 --- a/drivers/greybus/cpc/cpc.h +++ b/drivers/greybus/cpc/cpc.h @@ -9,15 +9,15 @@ #include <linux/device.h> #include <linux/greybus.h> #include <linux/mutex.h> +#include <linux/skbuff.h> #include <linux/types.h>
-struct sk_buff; - /** * struct cpc_cport - CPC cport * @id: cport ID * @cpc_hd: pointer to the CPC host device this cport belongs to * @lock: mutex to synchronize accesses to tcb and other attributes + * @holding_queue: list of frames queued to be sent * @tcb: Transmission Control Block */ struct cpc_cport { @@ -26,6 +26,8 @@ struct cpc_cport { struct cpc_host_device *cpc_hd; struct mutex lock; /* Synchronize access to state variables */
+ struct sk_buff_head holding_queue; + /* * @ack: current acknowledge number * @seq: current sequence number @@ -42,7 +44,7 @@ void cpc_cport_release(struct cpc_cport *cport); void cpc_cport_pack(struct gb_operation_msg_hdr *gb_hdr, u16 cport_id); u16 cpc_cport_unpack(struct gb_operation_msg_hdr *gb_hdr);
-int cpc_cport_transmit(struct cpc_cport *cport, struct sk_buff *skb); +void cpc_cport_transmit(struct cpc_cport *cport, struct sk_buff *skb);
struct cpc_skb_cb { struct cpc_cport *cport; @@ -58,7 +60,7 @@ struct cpc_skb_cb {
#define CPC_SKB_CB(__skb) ((struct cpc_skb_cb *)&((__skb)->cb[0]))
-void cpc_protocol_prepare_header(struct sk_buff *skb, u8 ack); void cpc_protocol_on_data(struct cpc_cport *cport, struct sk_buff *skb); +void __cpc_protocol_write_head(struct cpc_cport *cport);
#endif diff --git a/drivers/greybus/cpc/cport.c b/drivers/greybus/cpc/cport.c index 35a148abbed..f850da7acfb 100644 --- a/drivers/greybus/cpc/cport.c +++ b/drivers/greybus/cpc/cport.c @@ -7,7 +7,6 @@ #include <linux/skbuff.h>
#include "cpc.h" -#include "host.h"
/** * cpc_cport_tcb_reset() - Reset cport's TCB to initial values. @@ -38,15 +37,23 @@ struct cpc_cport *cpc_cport_alloc(u16 cport_id, gfp_t gfp_mask) cpc_cport_tcb_reset(cport);
mutex_init(&cport->lock); + skb_queue_head_init(&cport->holding_queue);
return cport; }
void cpc_cport_release(struct cpc_cport *cport) { + skb_queue_purge(&cport->holding_queue); kfree(cport); }
+static void cpc_cport_queue_skb(struct cpc_cport *cport, struct sk_buff *skb) +{ + __skb_header_release(skb); + __skb_queue_tail(&cport->holding_queue, skb); +} + /** * cpc_cport_pack() - Pack CPort ID into Greybus Operation Message header. * @gb_hdr: Greybus operation message header. @@ -73,11 +80,9 @@ u16 cpc_cport_unpack(struct gb_operation_msg_hdr *gb_hdr) * @cport: cport. * @skb: skb to be transmitted. */ -int cpc_cport_transmit(struct cpc_cport *cport, struct sk_buff *skb) +void cpc_cport_transmit(struct cpc_cport *cport, struct sk_buff *skb) { - struct cpc_host_device *cpc_hd = cport->cpc_hd; struct gb_operation_msg_hdr *gb_hdr; - u8 ack;
/* Inject cport ID in Greybus header */ gb_hdr = (struct gb_operation_msg_hdr *)skb->data; @@ -89,11 +94,9 @@ int cpc_cport_transmit(struct cpc_cport *cport, struct sk_buff *skb) CPC_SKB_CB(skb)->cpc_flags = CPC_SKB_FLAG_REQ_ACK;
cport->tcb.seq++; - ack = cport->tcb.ack; + + cpc_cport_queue_skb(cport, skb); + __cpc_protocol_write_head(cport);
mutex_unlock(&cport->lock); - - cpc_protocol_prepare_header(skb, ack); - - return cpc_hd_send_skb(cpc_hd, skb); } diff --git a/drivers/greybus/cpc/host.c b/drivers/greybus/cpc/host.c index 7f0579fde26..ec43d33dfc6 100644 --- a/drivers/greybus/cpc/host.c +++ b/drivers/greybus/cpc/host.c @@ -62,7 +62,9 @@ static int cpc_hd_message_send(struct cpc_host_device *cpc_hd, u16 cport_id, CPC_SKB_CB(skb)->cport = cport; CPC_SKB_CB(skb)->gb_message = message;
- return cpc_cport_transmit(cport, skb); + cpc_cport_transmit(cport, skb); + + return 0; }
static int cpc_hd_cport_allocate(struct cpc_host_device *cpc_hd, int cport_id, unsigned long flags) diff --git a/drivers/greybus/cpc/protocol.c b/drivers/greybus/cpc/protocol.c index b4dd4e173a1..ef8ff0cac24 100644 --- a/drivers/greybus/cpc/protocol.c +++ b/drivers/greybus/cpc/protocol.c @@ -14,7 +14,7 @@ static bool cpc_skb_is_sequenced(struct sk_buff *skb) return CPC_SKB_CB(skb)->cpc_flags & CPC_SKB_FLAG_REQ_ACK; }
-void cpc_protocol_prepare_header(struct sk_buff *skb, u8 ack) +static void cpc_protocol_prepare_header(struct sk_buff *skb, u8 ack) { struct cpc_header *hdr;
@@ -84,3 +84,26 @@ void cpc_protocol_on_data(struct cpc_cport *cport, struct sk_buff *skb) greybus_data_rcvd(cport->cpc_hd->gb_hd, cport->id, skb->data, skb->len); } } + +static void __cpc_protocol_write_skb(struct cpc_cport *cport, struct sk_buff *skb, u8 ack) +{ + cpc_protocol_prepare_header(skb, ack); + + cpc_hd_send_skb(cport->cpc_hd, skb); +} + +/* Write skbs at the head of holding queue */ +void __cpc_protocol_write_head(struct cpc_cport *cport) +{ + struct sk_buff *skb; + u8 ack; + + ack = cport->tcb.ack; + + /* For each SKB in the holding queue, clone it and pass it to lower layer */ + while ((skb = skb_peek(&cport->holding_queue))) { + skb_unlink(skb, &cport->holding_queue); + + __cpc_protocol_write_skb(cport, skb, ack); + } +}
The RX window indicates how many reception buffers the peer has available for that cport. The other peer must not send more messages than that window allows, or the chances of those messages being lost are very high, leading to retransmissions and poor performance.
The RX window is associated with the ack number and indicates the valid range of sequence numbers the other peer can use:

    Ack    RX window    Valid Sequence Range
     X         0         None
     X         1         X
     X         2         X -> X+1
So every time an ack is received, the driver evaluates whether the valid sequence range has changed and, if so, pops messages from its holding queue.
As the skb is moved to another queue, it cannot be passed directly to the lower layer anymore; instead, a clone is passed.
Signed-off-by: Damien Riégel <damien.riegel@silabs.com>
---
 drivers/greybus/cpc/cpc.h      |  9 ++++
 drivers/greybus/cpc/cport.c    |  5 ++
 drivers/greybus/cpc/header.c   | 88 ++++++++++++++++++++++++++++++++++
 drivers/greybus/cpc/header.h   |  6 +++
 drivers/greybus/cpc/host.c     |  9 ----
 drivers/greybus/cpc/host.h     |  1 -
 drivers/greybus/cpc/protocol.c | 72 +++++++++++++++++++++++++---
 7 files changed, 173 insertions(+), 17 deletions(-)
diff --git a/drivers/greybus/cpc/cpc.h b/drivers/greybus/cpc/cpc.h index f1669585c45..e8cbe916630 100644 --- a/drivers/greybus/cpc/cpc.h +++ b/drivers/greybus/cpc/cpc.h @@ -18,6 +18,7 @@ * @cpc_hd: pointer to the CPC host device this cport belongs to * @lock: mutex to synchronize accesses to tcb and other attributes * @holding_queue: list of frames queued to be sent + * @retx_queue: list of frames sent and waiting for acknowledgment * @tcb: Transmission Control Block */ struct cpc_cport { @@ -27,12 +28,20 @@ struct cpc_cport { struct mutex lock; /* Synchronize access to state variables */
struct sk_buff_head holding_queue; + struct sk_buff_head retx_queue;
/* + * @send_wnd: send window, maximum number of frames that the remote can accept. + * TX frames should have a sequence in the range [send_una; send_una + send_wnd] + * @send_nxt: send next, the next sequence number that will be used for transmission + * @send_una: send unacknowledged, the oldest unacknowledged sequence number * @ack: current acknowledge number * @seq: current sequence number */ struct { + u8 send_wnd; + u8 send_nxt; + u8 send_una; u8 ack; u8 seq; } tcb; diff --git a/drivers/greybus/cpc/cport.c b/drivers/greybus/cpc/cport.c index f850da7acfb..2f37bd035b6 100644 --- a/drivers/greybus/cpc/cport.c +++ b/drivers/greybus/cpc/cport.c @@ -16,6 +16,9 @@ static void cpc_cport_tcb_reset(struct cpc_cport *cport) { cport->tcb.ack = 0; cport->tcb.seq = 0; + cport->tcb.send_nxt = 0; + cport->tcb.send_una = 0; + cport->tcb.send_wnd = 1; }
/** @@ -38,12 +41,14 @@ struct cpc_cport *cpc_cport_alloc(u16 cport_id, gfp_t gfp_mask)
mutex_init(&cport->lock); skb_queue_head_init(&cport->holding_queue); + skb_queue_head_init(&cport->retx_queue);
return cport; }
void cpc_cport_release(struct cpc_cport *cport) { + skb_queue_purge(&cport->retx_queue); skb_queue_purge(&cport->holding_queue); kfree(cport); } diff --git a/drivers/greybus/cpc/header.c b/drivers/greybus/cpc/header.c index 8875a6fed26..43038f103b5 100644 --- a/drivers/greybus/cpc/header.c +++ b/drivers/greybus/cpc/header.c @@ -22,6 +22,17 @@ bool cpc_header_is_control(const struct cpc_header *hdr) return hdr->ctrl_flags & CPC_HEADER_CONTROL_IS_CONTROL_MASK; }
+/** + * cpc_header_get_recv_wnd() - Get the receive window. + * @hdr: CPC header. + * + * Return: Receive window. + */ +u8 cpc_header_get_recv_wnd(const struct cpc_header *hdr) +{ + return hdr->recv_wnd; +} + /** * cpc_header_get_seq() - Get the sequence number. * @hdr: CPC header. @@ -33,6 +44,17 @@ u8 cpc_header_get_seq(const struct cpc_header *hdr) return hdr->seq; }
+/** + * cpc_header_get_ack() - Get the acknowledge number. + * @hdr: CPC header. + * + * Return: Acknowledge number. + */ +u8 cpc_header_get_ack(const struct cpc_header *hdr) +{ + return hdr->ack; +} + /** * cpc_header_get_req_ack() - Get the request acknowledge frame flag. * @hdr: CPC header. @@ -56,3 +78,69 @@ u8 cpc_header_encode_ctrl_flags(bool control, bool req_ack) return FIELD_PREP(CPC_HEADER_CONTROL_IS_CONTROL_MASK, control) | FIELD_PREP(CPC_HEADER_CONTROL_REQ_ACK_MASK, req_ack); } + +/** + * cpc_header_get_frames_acked_count() - Get frames to be acknowledged. + * @seq: Current sequence number of the endpoint. + * @ack: Acknowledge number of the received frame. + * + * Return: Frames to be acknowledged. + */ +u8 cpc_header_get_frames_acked_count(u8 seq, u8 ack) +{ + u8 frames_acked_count; + + /* Find number of frames acknowledged with ACK number. */ + if (ack > seq) { + frames_acked_count = ack - seq; + } else { + frames_acked_count = 256 - seq; + frames_acked_count += ack; + } + + return frames_acked_count; +} + +/** + * cpc_header_number_in_window() - Test if a number is within a window. + * @start: Start of the window. + * @end: Window size. + * @n: Number to be tested. + * + * Given the start of the window and its size, test if the number is + * in the range [start; start + wnd). + * + * @return True if start <= n <= start + wnd - 1 (modulo 256), otherwise false. + */ +bool cpc_header_number_in_window(u8 start, u8 wnd, u8 n) +{ + u8 end; + + if (wnd == 0) + return false; + + end = start + wnd - 1; + + return cpc_header_number_in_range(start, end, n); +} + +/** + * cpc_header_number_in_range() - Test if a number is between start and end (included). + * @start: Lowest limit. + * @end: Highest limit inclusively. + * @n: Number to be tested. + * + * @return True if start <= n <= end (modulo 256), otherwise false. 
+ */ +bool cpc_header_number_in_range(u8 start, u8 end, u8 n) +{ + if (end >= start) { + if (n < start || n > end) + return false; + } else { + if (n > end && n < start) + return false; + } + + return true; +} diff --git a/drivers/greybus/cpc/header.h b/drivers/greybus/cpc/header.h index 0c9f6e56524..053dc3707d0 100644 --- a/drivers/greybus/cpc/header.h +++ b/drivers/greybus/cpc/header.h @@ -42,8 +42,14 @@ struct cpc_header { #define GREYBUS_HEADER_SIZE (sizeof(struct gb_operation_msg_hdr))
bool cpc_header_is_control(const struct cpc_header *hdr); +u8 cpc_header_get_recv_wnd(const struct cpc_header *hdr); u8 cpc_header_get_seq(const struct cpc_header *hdr); +u8 cpc_header_get_ack(const struct cpc_header *hdr); bool cpc_header_get_req_ack(const struct cpc_header *hdr); u8 cpc_header_encode_ctrl_flags(bool control, bool req_ack);
+u8 cpc_header_get_frames_acked_count(u8 seq, u8 ack); +bool cpc_header_number_in_window(u8 start, u8 wnd, u8 n); +bool cpc_header_number_in_range(u8 start, u8 end, u8 n); + #endif diff --git a/drivers/greybus/cpc/host.c b/drivers/greybus/cpc/host.c index ec43d33dfc6..a7715c0a960 100644 --- a/drivers/greybus/cpc/host.c +++ b/drivers/greybus/cpc/host.c @@ -199,15 +199,6 @@ void cpc_hd_del(struct cpc_host_device *cpc_hd) } EXPORT_SYMBOL_GPL(cpc_hd_del);
-void cpc_hd_message_sent(struct sk_buff *skb, int status) -{ - struct cpc_host_device *cpc_hd = CPC_SKB_CB(skb)->cport->cpc_hd; - struct gb_host_device *hd = cpc_hd->gb_hd; - - greybus_message_sent(hd, CPC_SKB_CB(skb)->gb_message, status); -} -EXPORT_SYMBOL_GPL(cpc_hd_message_sent); - void cpc_hd_rcvd(struct cpc_host_device *cpc_hd, struct sk_buff *skb) { struct gb_operation_msg_hdr *gb_hdr; diff --git a/drivers/greybus/cpc/host.h b/drivers/greybus/cpc/host.h index cc835f5298b..8f05877b2be 100644 --- a/drivers/greybus/cpc/host.h +++ b/drivers/greybus/cpc/host.h @@ -46,7 +46,6 @@ int cpc_hd_add(struct cpc_host_device *cpc_hd); void cpc_hd_put(struct cpc_host_device *cpc_hd); void cpc_hd_del(struct cpc_host_device *cpc_hd); void cpc_hd_rcvd(struct cpc_host_device *cpc_hd, struct sk_buff *skb); -void cpc_hd_message_sent(struct sk_buff *skb, int status);
int cpc_hd_send_skb(struct cpc_host_device *cpc_hd, struct sk_buff *skb);
diff --git a/drivers/greybus/cpc/protocol.c b/drivers/greybus/cpc/protocol.c index ef8ff0cac24..b5de63d0be8 100644 --- a/drivers/greybus/cpc/protocol.c +++ b/drivers/greybus/cpc/protocol.c @@ -14,7 +14,7 @@ static bool cpc_skb_is_sequenced(struct sk_buff *skb) return CPC_SKB_CB(skb)->cpc_flags & CPC_SKB_FLAG_REQ_ACK; }
-static void cpc_protocol_prepare_header(struct sk_buff *skb, u8 ack) +static void cpc_protocol_prepare_header(struct sk_buff *skb, u8 ack, u8 recv_window) { struct cpc_header *hdr;
@@ -22,7 +22,7 @@ static void cpc_protocol_prepare_header(struct sk_buff *skb, u8 ack)
hdr = (struct cpc_header *)skb->data; hdr->ack = ack; - hdr->recv_wnd = 0; + hdr->recv_wnd = recv_window; hdr->seq = CPC_SKB_CB(skb)->seq; hdr->ctrl_flags = cpc_header_encode_ctrl_flags(!CPC_SKB_CB(skb)->gb_message, cpc_skb_is_sequenced(skb)); @@ -46,11 +46,47 @@ static void cpc_protocol_queue_ack(struct cpc_cport *cport, u8 ack) gb_hdr->size = sizeof(*gb_hdr); cpc_cport_pack(gb_hdr, cport->id);
- cpc_protocol_prepare_header(skb, ack); + cpc_protocol_prepare_header(skb, ack, CPC_HEADER_MAX_RX_WINDOW);
cpc_hd_send_skb(cport->cpc_hd, skb); }
+static void __cpc_protocol_receive_ack(struct cpc_cport *cport, u8 recv_wnd, u8 ack) +{ + struct gb_host_device *gb_hd = cport->cpc_hd->gb_hd; + struct sk_buff *skb; + u8 acked_frames; + + cport->tcb.send_wnd = recv_wnd; + + skb = skb_peek(&cport->retx_queue); + if (!skb) + return; + + /* Return if no frame to ACK. */ + if (!cpc_header_number_in_range(cport->tcb.send_una, cport->tcb.send_nxt, ack)) + return; + + /* Calculate how many frames will be ACK'd. */ + acked_frames = cpc_header_get_frames_acked_count(CPC_SKB_CB(skb)->seq, ack); + + for (u8 i = 0; i < acked_frames; i++) { + skb = skb_dequeue(&cport->retx_queue); + if (!skb) { + dev_err_ratelimited(cpc_hd_dev(cport->cpc_hd), + "pending ack queue shorter than expected"); + break; + } + + if (CPC_SKB_CB(skb)->gb_message) + greybus_message_sent(gb_hd, CPC_SKB_CB(skb)->gb_message, 0); + + kfree_skb(skb); + + cport->tcb.send_una++; + } +} + void cpc_protocol_on_data(struct cpc_cport *cport, struct sk_buff *skb) { struct cpc_header *cpc_hdr = (struct cpc_header *)skb->data; @@ -61,6 +97,9 @@ void cpc_protocol_on_data(struct cpc_cport *cport, struct sk_buff *skb)
mutex_lock(&cport->lock);
+ __cpc_protocol_receive_ack(cport, cpc_header_get_recv_wnd(cpc_hdr), + cpc_header_get_ack(cpc_hdr)); + if (require_ack) { expected_seq = seq == cport->tcb.ack; if (expected_seq) @@ -72,6 +111,8 @@ void cpc_protocol_on_data(struct cpc_cport *cport, struct sk_buff *skb)
ack = cport->tcb.ack;
+ __cpc_protocol_write_head(cport); + mutex_unlock(&cport->lock);
/* Ack no matter if the sequence was valid or not, to resync with remote */ @@ -85,9 +126,10 @@ void cpc_protocol_on_data(struct cpc_cport *cport, struct sk_buff *skb) } }
-static void __cpc_protocol_write_skb(struct cpc_cport *cport, struct sk_buff *skb, u8 ack) +static void __cpc_protocol_write_skb(struct cpc_cport *cport, struct sk_buff *skb, u8 ack, + u8 recv_window) { - cpc_protocol_prepare_header(skb, ack); + cpc_protocol_prepare_header(skb, ack, recv_window);
cpc_hd_send_skb(cport->cpc_hd, skb); } @@ -96,14 +138,30 @@ static void __cpc_protocol_write_skb(struct cpc_cport *cport, struct sk_buff *sk void __cpc_protocol_write_head(struct cpc_cport *cport) { struct sk_buff *skb; - u8 ack; + u8 ack, send_una, send_wnd;
ack = cport->tcb.ack; + send_una = cport->tcb.send_una; + send_wnd = cport->tcb.send_wnd;
/* For each SKB in the holding queue, clone it and pass it to lower layer */ while ((skb = skb_peek(&cport->holding_queue))) { + struct sk_buff *out_skb; + + /* Skip this skb if it must be acked but the remote has no room for it. */ + if (!cpc_header_number_in_window(send_una, send_wnd, CPC_SKB_CB(skb)->seq)) + break; + + /* Clone and send out the skb */ + out_skb = skb_clone(skb, GFP_KERNEL); + if (!out_skb) + return; + skb_unlink(skb, &cport->holding_queue);
- __cpc_protocol_write_skb(cport, skb, ack); + __cpc_protocol_write_skb(cport, out_skb, ack, CPC_HEADER_MAX_RX_WINDOW); + + cport->tcb.send_nxt++; + skb_queue_tail(&cport->retx_queue, skb); } }
On Fri Dec 12, 2025 at 11:13 AM EST, Damien Riégel wrote:
> +u8 cpc_header_get_frames_acked_count(u8 seq, u8 ack)
> +{
> +	u8 frames_acked_count;
> +
> +	/* Find number of frames acknowledged with ACK number. */
> +	if (ack > seq) {
> +		frames_acked_count = ack - seq;
> +	} else {
> +		frames_acked_count = 256 - seq;
> +		frames_acked_count += ack;
> +	}
> +
> +	return frames_acked_count;
> +}
[nit] As stated in the RFC, this can be simplified to `return ack - seq;`, since `ack`, `seq`, and the return value are all `u8`.
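The reviewer's point is easy to verify in isolation: unsigned 8-bit arithmetic is already modulo 256, so the explicit wraparound branch computes the same value as a plain subtraction. A minimal standalone sketch, with `uint8_t` standing in for the kernel's `u8`:

```c
#include <stdint.h>

/* Branchless equivalent: u8 subtraction wraps modulo 256, so the
 * ack > seq and ack <= seq cases collapse into one expression. */
static uint8_t frames_acked_count(uint8_t seq, uint8_t ack)
{
	return (uint8_t)(ack - seq);
}
```

For example seq = 250, ack = 2 yields 8, matching the two-branch version (256 - 250 + 2).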
Thanks,
This lets the CPC host device drivers dequeue frames when it is convenient for them to do so, instead of forcing each of them to implement a queue to store pending skbs.
The callback is changed from `transmit` to `wake_tx`, letting the CPC core notify these drivers when there is something to transmit.
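The contract can be sketched with a toy model (all names hypothetical, no kernel APIs): the core owns the queue, the send path only enqueues and calls the wake callback, and the driver drains at its own pace:

```c
#include <stdbool.h>

#define QLEN 8

/* Toy model of the wake_tx contract: the core owns the TX queue and
 * merely pokes the driver; the driver dequeues when convenient.
 * Bounds checks and locking omitted for brevity. */
struct toy_hd {
	int queue[QLEN];
	int head, tail;
	bool tx_wanted; /* set by wake_tx, the driver's cue to run */
};

static int toy_wake_tx(struct toy_hd *hd)
{
	hd->tx_wanted = true; /* a real driver would schedule work here */
	return 0;
}

/* Core-side send: enqueue, then notify the driver. */
static void toy_send(struct toy_hd *hd, int frame)
{
	hd->queue[hd->tail++] = frame;
	toy_wake_tx(hd);
}

/* Driver-side drain: runs later, from the driver's own context. */
static int toy_drain(struct toy_hd *hd)
{
	int n = 0;

	while (hd->head != hd->tail) {
		hd->head++; /* a real driver would transmit queue[head] */
		n++;
	}
	hd->tx_wanted = false;
	return n;
}

static int toy_roundtrip(void)
{
	struct toy_hd hd = { .head = 0 };

	toy_send(&hd, 42);
	toy_send(&hd, 43);
	return toy_drain(&hd);
}
```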
Signed-off-by: Damien Riégel damien.riegel@silabs.com --- drivers/greybus/cpc/host.c | 74 ++++++++++++++++++++++++++++++++++++-- drivers/greybus/cpc/host.h | 12 +++++-- 2 files changed, 81 insertions(+), 5 deletions(-)
diff --git a/drivers/greybus/cpc/host.c b/drivers/greybus/cpc/host.c index a7715c0a960..54f0b07efec 100644 --- a/drivers/greybus/cpc/host.c +++ b/drivers/greybus/cpc/host.c @@ -155,6 +155,7 @@ static struct gb_hd_driver cpc_gb_driver = { static void cpc_hd_init(struct cpc_host_device *cpc_hd) { mutex_init(&cpc_hd->lock); + skb_queue_head_init(&cpc_hd->tx_queue); }
struct cpc_host_device *cpc_hd_create(struct cpc_hd_driver *driver, struct device *parent) @@ -162,7 +163,7 @@ struct cpc_host_device *cpc_hd_create(struct cpc_hd_driver *driver, struct devic struct cpc_host_device *cpc_hd; struct gb_host_device *hd;
- if (!driver->transmit) { + if (!driver->wake_tx) { dev_err(parent, "missing mandatory callback\n"); return ERR_PTR(-EINVAL); } @@ -231,13 +232,80 @@ EXPORT_SYMBOL_GPL(cpc_hd_rcvd); * @cpc_hd: Host device to send SKB over. * @skb: SKB to send. */ -int cpc_hd_send_skb(struct cpc_host_device *cpc_hd, struct sk_buff *skb) +void cpc_hd_send_skb(struct cpc_host_device *cpc_hd, struct sk_buff *skb) { const struct cpc_hd_driver *drv = cpc_hd->driver;
- return drv->transmit(cpc_hd, skb); + mutex_lock(&cpc_hd->lock); + skb_queue_tail(&cpc_hd->tx_queue, skb); + mutex_unlock(&cpc_hd->lock); + + drv->wake_tx(cpc_hd); }
+/** + * cpc_hd_tx_queue_empty() - Check if transmit queue is empty. + * @cpc_hd: CPC Host Device. + * + * Return: True if transmit queue is empty, false otherwise. + */ +bool cpc_hd_tx_queue_empty(struct cpc_host_device *cpc_hd) +{ + bool empty; + + mutex_lock(&cpc_hd->lock); + empty = skb_queue_empty(&cpc_hd->tx_queue); + mutex_unlock(&cpc_hd->lock); + + return empty; +} +EXPORT_SYMBOL_GPL(cpc_hd_tx_queue_empty); + +/** + * cpc_hd_dequeue() - Get the next SKB that was queued for transmission. + * @cpc_hd: CPC Host Device. + * + * Get an SKB that was previously queued by cpc_hd_send_skb(). + * + * Return: An SKB, or %NULL if queue was empty. + */ +struct sk_buff *cpc_hd_dequeue(struct cpc_host_device *cpc_hd) +{ + struct sk_buff *skb; + + mutex_lock(&cpc_hd->lock); + skb = skb_dequeue(&cpc_hd->tx_queue); + mutex_unlock(&cpc_hd->lock); + + return skb; +} +EXPORT_SYMBOL_GPL(cpc_hd_dequeue); + +/** + * cpc_hd_dequeue_many() - Get the next max_frames SKBs that were queued for transmission. + * @cpc_hd: CPC host device. + * @frame_list: Caller-provided sk_buff_head to fill with dequeued frames. + * @max_frames: Maximum number of frames to dequeue. + * + * Return: Number of frames actually dequeued. 
+ */ +u32 cpc_hd_dequeue_many(struct cpc_host_device *cpc_hd, struct sk_buff_head *frame_list, + unsigned int max_frames) +{ + struct sk_buff *skb; + unsigned int count = 0; + + mutex_lock(&cpc_hd->lock); + while (count < max_frames && (skb = skb_dequeue(&cpc_hd->tx_queue))) { + skb_queue_tail(frame_list, skb); + count++; + } + mutex_unlock(&cpc_hd->lock); + + return count; +} +EXPORT_SYMBOL_GPL(cpc_hd_dequeue_many); + MODULE_DESCRIPTION("Greybus over CPC"); MODULE_LICENSE("GPL"); MODULE_AUTHOR("Silicon Laboratories, Inc."); diff --git a/drivers/greybus/cpc/host.h b/drivers/greybus/cpc/host.h index 8f05877b2be..ee6a86de309 100644 --- a/drivers/greybus/cpc/host.h +++ b/drivers/greybus/cpc/host.h @@ -9,6 +9,7 @@ #include <linux/device.h> #include <linux/greybus.h> #include <linux/mutex.h> +#include <linux/skbuff.h> #include <linux/types.h>
#define GB_CPC_MSG_SIZE_MAX 4096 @@ -18,7 +19,7 @@ struct cpc_cport; struct cpc_host_device;
struct cpc_hd_driver { - int (*transmit)(struct cpc_host_device *hd, struct sk_buff *skb); + int (*wake_tx)(struct cpc_host_device *cpc_hd); };
/** @@ -34,6 +35,8 @@ struct cpc_host_device {
struct mutex lock; /* Synchronize access to cports */ struct cpc_cport *cports[GB_CPC_NUM_CPORTS]; + + struct sk_buff_head tx_queue; };
static inline struct device *cpc_hd_dev(struct cpc_host_device *cpc_hd) @@ -47,6 +50,11 @@ void cpc_hd_put(struct cpc_host_device *cpc_hd); void cpc_hd_del(struct cpc_host_device *cpc_hd); void cpc_hd_rcvd(struct cpc_host_device *cpc_hd, struct sk_buff *skb);
-int cpc_hd_send_skb(struct cpc_host_device *cpc_hd, struct sk_buff *skb); +void cpc_hd_send_skb(struct cpc_host_device *cpc_hd, struct sk_buff *skb); + +bool cpc_hd_tx_queue_empty(struct cpc_host_device *cpc_hd); +struct sk_buff *cpc_hd_dequeue(struct cpc_host_device *cpc_hd); +u32 cpc_hd_dequeue_many(struct cpc_host_device *cpc_hd, struct sk_buff_head *frame_list, + unsigned int max_frames);
#endif
Add a private data pointer when creating a CPC Host Device. This lets the host device drivers get back their context more easily in the callbacks.
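This is the usual opaque-context idiom; a standalone sketch with hypothetical names, mirroring what `cpc_hd_to_cpc_sdio()` does in the SDIO driver later in the series:

```c
/* Opaque-context idiom (names hypothetical): the core stores the
 * pointer it was given at create time; driver callbacks cast it back. */
struct toy_host_device {
	void *priv;
};

struct toy_driver_ctx {
	int wake_count;
};

static struct toy_driver_ctx *to_ctx(struct toy_host_device *hd)
{
	return (struct toy_driver_ctx *)hd->priv;
}

/* A callback only receives the host device, yet recovers the
 * driver's own context through priv. */
static int toy_wake(struct toy_host_device *hd)
{
	return ++to_ctx(hd)->wake_count;
}

static int toy_demo(void)
{
	struct toy_driver_ctx ctx = { .wake_count = 0 };
	struct toy_host_device hd = { .priv = &ctx };

	toy_wake(&hd);
	return toy_wake(&hd); /* both calls hit the same context */
}
```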
Signed-off-by: Damien Riégel damien.riegel@silabs.com --- drivers/greybus/cpc/host.c | 4 +++- drivers/greybus/cpc/host.h | 5 ++++- 2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/greybus/cpc/host.c b/drivers/greybus/cpc/host.c index 54f0b07efec..2784207279e 100644 --- a/drivers/greybus/cpc/host.c +++ b/drivers/greybus/cpc/host.c @@ -158,7 +158,8 @@ static void cpc_hd_init(struct cpc_host_device *cpc_hd) skb_queue_head_init(&cpc_hd->tx_queue); }
-struct cpc_host_device *cpc_hd_create(struct cpc_hd_driver *driver, struct device *parent) +struct cpc_host_device *cpc_hd_create(struct cpc_hd_driver *driver, struct device *parent, + void *priv) { struct cpc_host_device *cpc_hd; struct gb_host_device *hd; @@ -175,6 +176,7 @@ struct cpc_host_device *cpc_hd_create(struct cpc_hd_driver *driver, struct devic cpc_hd = gb_hd_to_cpc_hd(hd); cpc_hd->gb_hd = hd; cpc_hd->driver = driver; + cpc_hd->priv = priv;
cpc_hd_init(cpc_hd);
diff --git a/drivers/greybus/cpc/host.h b/drivers/greybus/cpc/host.h index ee6a86de309..4bb7339b394 100644 --- a/drivers/greybus/cpc/host.h +++ b/drivers/greybus/cpc/host.h @@ -37,6 +37,8 @@ struct cpc_host_device { struct cpc_cport *cports[GB_CPC_NUM_CPORTS];
struct sk_buff_head tx_queue; + + void *priv; };
static inline struct device *cpc_hd_dev(struct cpc_host_device *cpc_hd) @@ -44,7 +46,8 @@ static inline struct device *cpc_hd_dev(struct cpc_host_device *cpc_hd) return &cpc_hd->gb_hd->dev; }
-struct cpc_host_device *cpc_hd_create(struct cpc_hd_driver *driver, struct device *parent); +struct cpc_host_device *cpc_hd_create(struct cpc_hd_driver *driver, struct device *parent, + void *priv); int cpc_hd_add(struct cpc_host_device *cpc_hd); void cpc_hd_put(struct cpc_host_device *cpc_hd); void cpc_hd_del(struct cpc_host_device *cpc_hd);
From: Gabriel Beaulieu gabriel.beaulieu@silabs.com
This introduces a new module, gb-cpc-sdio, to communicate with a Greybus CPC device over SDIO.
Most of the complexity stems from aggregation: packets are aggregated to minimize the number of CMD53s. In the first block, the first le32 is the number of packets in this transfer. Immediately after that are all the packet headers (CPC + Greybus). This lets the device process all the headers in a single interrupt, and prepare the ADMA descriptors for all the payloads in one go.
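The header block bounds how many packets one transfer can carry: a 4-byte count plus one 12-byte combined header (4-byte CPC + 8-byte Greybus) per packet must all fit in a single 256-byte block. A quick check of the arithmetic behind `GB_CPC_SDIO_MAX_AGGREGATION` in the patch:

```c
#include <stdint.h>

#define BLOCK_SIZE      256u
#define CPC_HDR_SIZE    4u   /* CPC header prepended to Greybus messages */
#define GB_HDR_SIZE     8u   /* Greybus operation header */
#define FRAME_HDR_SIZE  (CPC_HDR_SIZE + GB_HDR_SIZE)

/* Packets per transfer: whatever fits in the header block after the
 * leading le32 frame count. */
static unsigned int max_aggregation(void)
{
	return (BLOCK_SIZE - sizeof(uint32_t)) / FRAME_HDR_SIZE;
}
```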
Payloads start at the beginning of the second block and are concatenated. Their lengths must be 32-bit aligned, so the driver takes care of adding and removing padding as necessary.
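The resulting transfer size mirrors what `cpc_sdio_get_aligned_size()` computes: each payload is padded to a 4-byte boundary, the payload area is rounded up to whole blocks, and one extra block holds the headers. A standalone sketch of that arithmetic (names hypothetical):

```c
#include <stddef.h>

#define BLK 256u
#define PAD 4u
/* Round x up to a multiple of a (a power of two), like the kernel's
 * ALIGN() macro. */
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((size_t)(a) - 1))

/* Total transfer size for n payloads: 4-byte-padded payloads rounded
 * up to whole blocks, plus one block for the headers. */
static size_t xfer_size(const size_t *payload_len, int n)
{
	size_t size = 0;

	for (int i = 0; i < n; i++)
		size += ALIGN_UP(payload_len[i], PAD);

	return BLK + ALIGN_UP(size, BLK);
}
```

Three payloads of 5, 8, and 3 bytes pad to 8 + 8 + 4 = 20 bytes of payload, which rounds up to one 256-byte block, plus the header block.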
Signed-off-by: Gabriel Beaulieu gabriel.beaulieu@silabs.com Signed-off-by: Damien Riégel damien.riegel@silabs.com --- drivers/greybus/cpc/Kconfig | 12 + drivers/greybus/cpc/Makefile | 3 + drivers/greybus/cpc/sdio.c | 554 +++++++++++++++++++++++++++++++++++ 3 files changed, 569 insertions(+) create mode 100644 drivers/greybus/cpc/sdio.c
diff --git a/drivers/greybus/cpc/Kconfig b/drivers/greybus/cpc/Kconfig index ab96fedd0de..8223f56795f 100644 --- a/drivers/greybus/cpc/Kconfig +++ b/drivers/greybus/cpc/Kconfig @@ -8,3 +8,15 @@ config GREYBUS_CPC
To compile this code as a module, chose M here: the module will be called gb-cpc.ko + +config GREYBUS_CPC_SDIO + tristate "Greybus CPC over SDIO" + depends on GREYBUS_CPC && MMC + help + This driver provides Greybus CPC host support for devices + connected via SDIO interface. + + To compile this driver as a module, choose M here: the module + will be called gb-cpc-sdio. + + If unsure, say N. diff --git a/drivers/greybus/cpc/Makefile b/drivers/greybus/cpc/Makefile index c4b530d27a3..3296536e86d 100644 --- a/drivers/greybus/cpc/Makefile +++ b/drivers/greybus/cpc/Makefile @@ -4,3 +4,6 @@ gb-cpc-y := cport.o header.o host.o protocol.o
# CPC core obj-$(CONFIG_GREYBUS_CPC) += gb-cpc.o + +gb-cpc-sdio-y := sdio.o +obj-$(CONFIG_GREYBUS_CPC_SDIO) += gb-cpc-sdio.o diff --git a/drivers/greybus/cpc/sdio.c b/drivers/greybus/cpc/sdio.c new file mode 100644 index 00000000000..5c467b83ad9 --- /dev/null +++ b/drivers/greybus/cpc/sdio.c @@ -0,0 +1,554 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025, Silicon Laboratories, Inc. + */ + +#include <linux/atomic.h> +#include <linux/bitops.h> +#include <linux/container_of.h> +#include <linux/delay.h> +#include <linux/device.h> +#include <linux/kthread.h> +#include <linux/minmax.h> +#include <linux/mmc/sdio_func.h> +#include <linux/mmc/sdio_ids.h> +#include <linux/skbuff.h> +#include <linux/slab.h> +#include <linux/wait.h> +#include <linux/workqueue.h> + +#include "cpc.h" +#include "header.h" +#include "host.h" + +#define GB_CPC_SDIO_MSG_SIZE_MAX 4096 +#define GB_CPC_SDIO_BLOCK_SIZE 256U +#define GB_CPC_SDIO_FIFO_ADDR 0 +#define GB_CPC_SDIO_ALIGNMENT 4 +#define GB_CPC_SDIO_DEFAULT_AGGREGATION 1 +#define CPC_FRAME_HEADER_SIZE (CPC_HEADER_SIZE + GREYBUS_HEADER_SIZE) +#define GB_CPC_SDIO_MAX_AGGREGATION ((GB_CPC_SDIO_BLOCK_SIZE - sizeof(u32)) / CPC_FRAME_HEADER_SIZE) + +enum cpc_sdio_flags { + CPC_SDIO_FLAG_IRQ_RUNNING, + CPC_SDIO_FLAG_TX_WORK_DELAYED, + CPC_SDIO_FLAG_SHUTDOWN, +}; + +struct cpc_sdio { + struct cpc_host_device *cpc_hd; + struct device *dev; + struct sdio_func *func; + + struct work_struct tx_work; + unsigned long flags; + + wait_queue_head_t event_queue; + u8 max_aggregation; +}; + +struct frame_header { + struct cpc_header cpc; + struct gb_operation_msg_hdr gb; +} __packed; + +static inline struct cpc_sdio *cpc_hd_to_cpc_sdio(struct cpc_host_device *cpc_hd) +{ + return (struct cpc_sdio *)cpc_hd->priv; +} + +static int gb_cpc_sdio_wake_tx(struct cpc_host_device *cpc_hd) +{ + struct cpc_sdio *ctx = cpc_hd_to_cpc_sdio(cpc_hd); + + if (test_bit(CPC_SDIO_FLAG_SHUTDOWN, &ctx->flags)) + return 0; + + /* Use system workqueue for TX processing 
*/ + schedule_work(&ctx->tx_work); + + return 0; +} + +/** + * Return the memory requirement in bytes for the aggregated frame aligned to the block size + */ +static size_t cpc_sdio_get_aligned_size(struct cpc_sdio *ctx, struct sk_buff_head *frame_list) +{ + size_t size = 0; + struct sk_buff *frame; + + /* Calculate total payload size */ + skb_queue_walk(frame_list, frame) { + size_t payload_len = frame->len - CPC_FRAME_HEADER_SIZE; + + /* payload is aligned and padded to 4 bytes */ + size += ALIGN(payload_len, GB_CPC_SDIO_ALIGNMENT); + } + + /* Make sure the total payload size is a round number of blocks */ + size = ALIGN(size, GB_CPC_SDIO_BLOCK_SIZE); + + /* Add an additional block for headers */ + size += GB_CPC_SDIO_BLOCK_SIZE; + + return size; +} + +static unsigned char *cpc_sdio_build_aggregated_frame(struct cpc_sdio *ctx, + struct sk_buff_head *frame_list, + size_t *xfer_len) +{ + unsigned char *tx_buff; + struct sk_buff *frame; + __le32 *frame_count; + size_t xfer_size; + unsigned int i = 0; + + xfer_size = cpc_sdio_get_aligned_size(ctx, frame_list); + + /* Allocate aggregated frame */ + tx_buff = kmalloc(xfer_size, GFP_KERNEL); + if (!tx_buff) + return NULL; + + frame_count = (__le32 *)tx_buff; + *frame_count = cpu_to_le32(skb_queue_len(frame_list)); + i += 4; + + /* Copy frame headers to aggregate buffer */ + skb_queue_walk(frame_list, frame) { + memcpy(&tx_buff[i], frame->data, CPC_FRAME_HEADER_SIZE); + i += CPC_FRAME_HEADER_SIZE; + } + + /* Zero-pad remainder of header block to fill complete SDIO block */ + if (i < GB_CPC_SDIO_BLOCK_SIZE) + memset(&tx_buff[i], 0, GB_CPC_SDIO_BLOCK_SIZE - i); + + /* Start injecting payload at beginning of second block */ + i = GB_CPC_SDIO_BLOCK_SIZE; + + /* Build payload blocks if required */ + if (xfer_size > GB_CPC_SDIO_BLOCK_SIZE) { + skb_queue_walk(frame_list, frame) { + size_t payload_len, padding_len; + + if (frame->len <= CPC_FRAME_HEADER_SIZE) + continue; + + payload_len = frame->len - CPC_FRAME_HEADER_SIZE; + 
memcpy(&tx_buff[i], &frame->data[CPC_FRAME_HEADER_SIZE], payload_len); + i += payload_len; + + padding_len = ALIGN(payload_len, GB_CPC_SDIO_ALIGNMENT) - payload_len; + if (padding_len) { + memset(&tx_buff[i], 0, padding_len); + i += padding_len; + } + } + } + + *xfer_len = xfer_size; + + return tx_buff; +} + +static bool cpc_sdio_get_payload_size(struct cpc_sdio *ctx, const struct frame_header *header, + size_t *payload_size) +{ + size_t gb_size; + + gb_size = le16_to_cpu(header->gb.size); + + /* Validate that the size is at least as large as the Greybus header */ + if (gb_size < GREYBUS_HEADER_SIZE) { + dev_dbg(ctx->dev, "Invalid Greybus header size: %lu\n", gb_size); + return false; + } + + /* Validate maximum size */ + if (gb_size > (GB_CPC_SDIO_MSG_SIZE_MAX + GREYBUS_HEADER_SIZE)) { + dev_dbg(ctx->dev, "Payload size exceeds maximum: %lu\n", gb_size); + return false; + } + + /* Size includes the Greybus header, so subtract it to get payload size */ + *payload_size = gb_size - GREYBUS_HEADER_SIZE; + + return true; +} + +/** + * Process aggregated frame + * Reconstructed frame layout: + * +-----+-----+-----+------+------+------+------+-------+---------+ + * | CPC Header (4B) | Size | OpID | Type | Stat | CPort | Payload | + * +-----+-----+-----+------+------+------+------+-------+---------+ + */ +static int cpc_sdio_process_aggregated_frame(struct cpc_sdio *ctx, unsigned char *aggregated_frame, + unsigned int frame_len) +{ + const struct frame_header *header; + size_t aligned_payload_size; + struct sk_buff *rx_skb; + const unsigned char *payload_start; + size_t payload_size; + size_t frame_size; + u32 frame_count; + unsigned int payload_offset; + + /* Get frame count from aggregated frame (4-byte u32) */ + frame_count = le32_to_cpu(*((__le32 *)aggregated_frame)); + if (frame_count == 0) { + dev_dbg(ctx->dev, "Process aggregated frame: invalid frame count: %u\n", + frame_count); + return -EINVAL; + } + + /* Ensure frame count doesn't exceed our negotiated maximum 
*/ + if (frame_count > ctx->max_aggregation) { + dev_warn(ctx->dev, + "Process aggregated frame: frame count %u exceeds negotiated maximum %u\n", + frame_count, ctx->max_aggregation); + //frame_count = ctx->effective_max_aggregation; + } + + /* Header starts at block 0 after frame count */ + header = (struct frame_header *)&aggregated_frame[sizeof(__le32)]; + + /* Payloads start at block 1 (after complete block 0) */ + payload_offset = GB_CPC_SDIO_BLOCK_SIZE; + + for (unsigned int i = 0; i < frame_count; i++) { + payload_start = &aggregated_frame[payload_offset]; + + /* Get payload size for this frame */ + if (!cpc_sdio_get_payload_size(ctx, header, &payload_size)) { + dev_err(ctx->dev, + "Process aggregated frame: failed to get payload size, aborting.\n"); + return -ERANGE; + } + + aligned_payload_size = ALIGN(payload_size, GB_CPC_SDIO_ALIGNMENT); + + /* Validate the payload is within the buffer boundary */ + if (payload_offset + aligned_payload_size > frame_len) { + dev_err(ctx->dev, + "Process aggregated frame: payload is out of buffer boundary, aborting at frame %u\n", + i); + return -ENOSPC; + } + + /* Calculate total frame size: CPC header + Greybus header + payload */ + frame_size = CPC_FRAME_HEADER_SIZE + payload_size; + + /* Allocate sk_buff for reconstructed frame */ + rx_skb = alloc_skb(frame_size, GFP_KERNEL); + if (rx_skb) { + /* Copy header */ + memcpy(skb_put(rx_skb, CPC_FRAME_HEADER_SIZE), header, + CPC_FRAME_HEADER_SIZE); + + /* Copy payload */ + if (payload_size > 0) + memcpy(skb_put(rx_skb, payload_size), payload_start, payload_size); + + /* Send reconstructed frame to CPC core */ + cpc_hd_rcvd(ctx->cpc_hd, rx_skb); + } + /* else: allocation failed, skip this frame but continue processing */ + + /* Move to next header and payload start address */ + header++; + payload_offset += aligned_payload_size; + } + + return 0; +} + +static u32 cpc_sdio_get_rx_num_bytes(struct sdio_func *func, int *err) +{ + unsigned int rx_num_block_addr = 0x0C; + + return 
sdio_readl(func, rx_num_block_addr, err); +} + +static void gb_cpc_sdio_rx(struct cpc_sdio *ctx) +{ + unsigned char *rx_buff; + size_t data_len; + int err; + + sdio_claim_host(ctx->func); + data_len = cpc_sdio_get_rx_num_bytes(ctx->func, &err); + + if (err) { + dev_err(ctx->dev, "failed to obtain byte count (%d)\n", err); + goto release_host; + } + + /* Validate minimum RX data length */ + if (data_len < sizeof(u32) + CPC_FRAME_HEADER_SIZE) { + dev_err(ctx->dev, "failed to obtain enough bytes for a header (%zu)\n", data_len); + goto release_host; + } + + /* Allocate sk_buff for RX data */ + rx_buff = kmalloc(data_len, GFP_KERNEL); + if (!rx_buff) + goto release_host; + + err = sdio_readsb(ctx->func, rx_buff, GB_CPC_SDIO_FIFO_ADDR, data_len); + sdio_release_host(ctx->func); + + if (err) { + dev_err(ctx->dev, "failed to sdio_readsb (%d)\n", err); + goto free_rx_skb; + } + + if (data_len < GB_CPC_SDIO_BLOCK_SIZE) { + dev_err(ctx->dev, "received %zd bytes, expected at least %u bytes\n", data_len, + GB_CPC_SDIO_BLOCK_SIZE); + goto free_rx_skb; + } + + /* de-aggregate incoming skb into individual frames and send to CPC core */ + cpc_sdio_process_aggregated_frame(ctx, rx_buff, data_len); + +free_rx_skb: + kfree(rx_buff); + + return; + +release_host: + sdio_release_host(ctx->func); +} + +static void gb_cpc_sdio_tx(struct cpc_sdio *ctx) +{ + struct sk_buff_head frame_list; + unsigned char *tx_buff; + size_t tx_len; + int err; + + skb_queue_head_init(&frame_list); + + /* Dequeue the negotiated maximum aggregated frames from the host device */ + cpc_hd_dequeue_many(ctx->cpc_hd, &frame_list, ctx->max_aggregation); + + /* Check if any frames were dequeued */ + if (skb_queue_empty(&frame_list)) + return; + + tx_buff = cpc_sdio_build_aggregated_frame(ctx, &frame_list, &tx_len); + if (!tx_buff) { + dev_err(ctx->dev, "failed to build aggregated frame\n"); + goto cleanup_frames; + } + + sdio_claim_host(ctx->func); + err = sdio_writesb(ctx->func, GB_CPC_SDIO_FIFO_ADDR, tx_buff, 
tx_len); + sdio_release_host(ctx->func); + + if (err) + dev_err(ctx->dev, "failed to sdio_writesb (%d)\n", err); + + kfree(tx_buff); + +cleanup_frames: + /* Clean up any remaining frames in the list */ + skb_queue_purge(&frame_list); +} + +static void gb_cpc_sdio_rx_tx(struct cpc_sdio *ctx) +{ + gb_cpc_sdio_rx(ctx); + + set_bit(CPC_SDIO_FLAG_IRQ_RUNNING, &ctx->flags); + gb_cpc_sdio_tx(ctx); + clear_bit(CPC_SDIO_FLAG_IRQ_RUNNING, &ctx->flags); +} + +static void gb_cpc_sdio_tx_work(struct work_struct *work) +{ + struct cpc_sdio *ctx = container_of(work, struct cpc_sdio, tx_work); + + /* Do not execute concurrently to the interrupt */ + if (test_bit(CPC_SDIO_FLAG_IRQ_RUNNING, &ctx->flags)) { + set_bit(CPC_SDIO_FLAG_TX_WORK_DELAYED, &ctx->flags); + return; + } + + gb_cpc_sdio_tx(ctx); +} + +static struct cpc_hd_driver cpc_sdio_driver = { + .wake_tx = gb_cpc_sdio_wake_tx, +}; + +static int cpc_sdio_init(struct sdio_func *func) +{ + unsigned char rx_data_ready_irq_en_bit = BIT(0); + unsigned int irq_enable_addr = 0x09; + int err; + + /* Enable the read data ready interrupt. */ + sdio_writeb(func, rx_data_ready_irq_en_bit, irq_enable_addr, &err); + if (err) + dev_err(&func->dev, "failed to set data ready interrupt (%d)\n", err); + + return err; +} + +static void cpc_sdio_irq_handler(struct sdio_func *func) +{ + struct cpc_sdio *ctx = sdio_get_drvdata(func); + struct device *dev = &func->dev; + unsigned int rx_data_pending_irq_bit = 0; + unsigned int irq_status_addr = 0x08; + unsigned long int_status; + int err; + + int_status = sdio_readb(func, irq_status_addr, &err); + if (err) { + dev_err(dev, "failed to read interrupt status registers (%d)\n", err); + return; + } + + if (__test_and_clear_bit(rx_data_pending_irq_bit, &int_status)) { + /* Clear the IRQ on the device side. 
*/ + sdio_writeb(func, BIT(rx_data_pending_irq_bit), irq_status_addr, &err); + if (err) { + dev_err(dev, "failed to clear read interrupt (%d), interrupt will repeat\n", + err); + return; + } + + cancel_work_sync(&ctx->tx_work); + gb_cpc_sdio_rx_tx(ctx); + + if (test_and_clear_bit(CPC_SDIO_FLAG_TX_WORK_DELAYED, &ctx->flags)) + schedule_work(&ctx->tx_work); + } + + if (int_status) + dev_err_once(dev, "unhandled interrupt from the device (%ld)\n", int_status); +} + +static int cpc_sdio_probe(struct sdio_func *func, const struct sdio_device_id *id) +{ + struct cpc_host_device *cpc_hd; + struct cpc_sdio *ctx; + int err; + + /* Allocate our private context */ + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); + if (!ctx) + return -ENOMEM; + + /* Create CPC host device with our context as private data */ + cpc_hd = cpc_hd_create(&cpc_sdio_driver, &func->dev, ctx); + if (IS_ERR(cpc_hd)) { + kfree(ctx); + return PTR_ERR(cpc_hd); + } + + /* Initialize context */ + ctx->cpc_hd = cpc_hd; + ctx->dev = cpc_hd_dev(cpc_hd); + ctx->func = func; + ctx->max_aggregation = GB_CPC_SDIO_DEFAULT_AGGREGATION; + + INIT_WORK(&ctx->tx_work, gb_cpc_sdio_tx_work); + + /* Make ctx available to IRQ handler before enabling/claiming IRQ */ + sdio_set_drvdata(func, ctx); + + sdio_claim_host(func); + + err = sdio_enable_func(func); + if (err) + goto release_host; + + err = sdio_set_block_size(func, GB_CPC_SDIO_BLOCK_SIZE); + if (err) + goto disable_func; + + err = cpc_sdio_init(func); + if (err) + goto disable_func; + + err = sdio_claim_irq(func, cpc_sdio_irq_handler); + if (err) + goto disable_func; + + err = cpc_hd_add(cpc_hd); + if (err) + goto release_irq; + + sdio_release_host(func); + + return 0; + +release_irq: + sdio_release_irq(func); + +disable_func: + sdio_disable_func(func); + +release_host: + sdio_release_host(func); + cpc_hd_put(cpc_hd); + kfree(ctx); + + return err; +} + +static void cpc_sdio_remove(struct sdio_func *func) +{ + struct cpc_sdio *ctx = sdio_get_drvdata(func); + struct 
cpc_host_device *cpc_hd = ctx->cpc_hd; + + /* Prevent new TX wakeups and wake the thread */ + set_bit(CPC_SDIO_FLAG_SHUTDOWN, &ctx->flags); + + /* Cancel and flush any pending TX work */ + cancel_work_sync(&ctx->tx_work); + + sdio_claim_host(func); + sdio_release_irq(func); + sdio_disable_func(func); + sdio_release_host(func); + + cpc_hd_del(cpc_hd); + cpc_hd_put(cpc_hd); + + kfree(ctx); +} + +/* NOTE: Development/RFC purposes only. */ +static const struct sdio_device_id sdio_ids[] = { + { + SDIO_DEVICE(0x0296, 0x5347), + }, + {}, +}; +MODULE_DEVICE_TABLE(sdio, sdio_ids); + +static struct sdio_driver gb_cpc_sdio_driver = { + .name = KBUILD_MODNAME, + .id_table = sdio_ids, + .probe = cpc_sdio_probe, + .remove = cpc_sdio_remove, + .drv = { + .owner = THIS_MODULE, + .name = KBUILD_MODNAME, + }, +}; + +module_sdio_driver(gb_cpc_sdio_driver); + +MODULE_DESCRIPTION("Greybus Host Driver for Silicon Labs devices using SDIO"); +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Damien Riégel damien.riegel@silabs.com");
On Fri Dec 12, 2025 at 11:13 AM EST, Damien Riégel wrote:
> +#include <linux/atomic.h>
> +#include <linux/bitops.h>
> +#include <linux/container_of.h>
> +#include <linux/delay.h>
> +#include <linux/device.h>
> +#include <linux/kthread.h>
> +#include <linux/minmax.h>
> +#include <linux/mmc/sdio_func.h>
> +#include <linux/mmc/sdio_ids.h>
> +#include <linux/skbuff.h>
> +#include <linux/slab.h>
> +#include <linux/wait.h>
> +#include <linux/workqueue.h>
I think there are a few includes that are not used here (`atomic.h`, `delay.h`, `kthread.h`, `minmax.h`); `slab.h` is needed for `kmalloc()`/`kfree()`, though.
> +/**
> + * Return the memory requirement in bytes for the aggregated frame aligned to the block size
> + */
> +static size_t cpc_sdio_get_aligned_size(struct cpc_sdio *ctx, struct sk_buff_head *frame_list)
> +{
> +	size_t size = 0;
> +	struct sk_buff *frame;
Check for reverse xmas tree notation; there are a few occurrences in this source file where this is not enforced.
> +static unsigned char *cpc_sdio_build_aggregated_frame(struct cpc_sdio *ctx,
> +						      struct sk_buff_head *frame_list,
> +						      size_t *xfer_len)
> +{
> [...]
> +	frame_count = (__le32 *)tx_buff;
> +	*frame_count = cpu_to_le32(skb_queue_len(frame_list));
> +	i += 4;
`i += sizeof(*frame_count);` to avoid a magic value. Also, it is more common to return the size of the built buffer rather than the buffer itself, so I would pass `char **tx_buff` as an argument and return `xfer_len`.
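A standalone sketch of both suggestions (hypothetical names, little-endian serialization done by hand to stay kernel-free): the offset advances by `sizeof` of the count rather than a literal 4, and a companion helper shows the buffer round-trip:

```c
#include <stddef.h>
#include <stdint.h>

/* Write v into buf as little-endian; advance by sizeof, not a magic 4. */
static size_t put_le32(uint8_t *buf, uint32_t v)
{
	for (size_t b = 0; b < sizeof(v); b++)
		buf[b] = (uint8_t)(v >> (8 * b));

	return sizeof(v); /* caller does i += put_le32(...) */
}

/* Round-trip helper: serialize and read back the lowest byte. */
static uint8_t low_byte_via_buf(uint32_t v)
{
	uint8_t buf[8] = {0};

	put_le32(buf, v);
	return buf[0];
}
```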
> +	/* Copy frame headers to aggregate buffer */
> +	skb_queue_walk(frame_list, frame) {
> +		memcpy(&tx_buff[i], frame->data, CPC_FRAME_HEADER_SIZE);
> +		i += CPC_FRAME_HEADER_SIZE;
> +	}
Declaring a local `struct frame_header *` would be more explicit.
> +	/* Zero-pad remainder of header block to fill complete SDIO block */
> +	if (i < GB_CPC_SDIO_BLOCK_SIZE)
> +		memset(&tx_buff[i], 0, GB_CPC_SDIO_BLOCK_SIZE - i);
Remove unnecessary `if`.
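The guard is redundant because a zero-length `memset()` is a well-defined no-op (the pointer must still be valid). A quick demonstration with a hypothetical padding helper:

```c
#include <string.h>

/* Zero the tail of a block and return its last byte; when used == block
 * the memset length is 0 and the buffer is left untouched. */
static unsigned char pad_tail(unsigned char *buf, unsigned int used,
			      unsigned int block)
{
	memset(buf + used, 0, block - used);
	return buf[block - 1];
}
```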
> +/**
> + * Process aggregated frame
> + * Reconstructed frame layout:
> + * +-----+-----+-----+------+------+------+------+-------+---------+
> + * | CPC Header (4B) | Size | OpID | Type | Stat | CPort | Payload |
> + * +-----+-----+-----+------+------+------+------+-------+---------+
> + */
> +static int cpc_sdio_process_aggregated_frame(struct cpc_sdio *ctx, unsigned char *aggregated_frame,
> +					     unsigned int frame_len)
> +{
> [...]
> +	/* Ensure frame count doesn't exceed our negotiated maximum */
> +	if (frame_count > ctx->max_aggregation) {
> +		dev_warn(ctx->dev,
> +			 "Process aggregated frame: frame count %u exceeds negotiated maximum %u\n",
> +			 frame_count, ctx->max_aggregation);
> +		//frame_count = ctx->effective_max_aggregation;
> +	}
First off, remove the commented-out assignment. Also, this function returns an integer that is never checked by the caller, so change the return type to `void`. I think the solution to handling this error is to simply return.
> +	/* Header starts at block 0 after frame count */
> +	header = (struct frame_header *)&aggregated_frame[sizeof(__le32)];
Use `sizeof(frame_count)` to make this more explicit, and make it easier to maintain if `frame_count` ever changes type.
> +	for (unsigned int i = 0; i < frame_count; i++) {
No need for `i` to be unsigned, just use an `int` to keep the code simpler.
> +		/* Allocate sk_buff for reconstructed frame */
> +		rx_skb = alloc_skb(frame_size, GFP_KERNEL);
> +		if (rx_skb) {
> +			/* Copy header */
> +			memcpy(skb_put(rx_skb, CPC_FRAME_HEADER_SIZE), header,
> +			       CPC_FRAME_HEADER_SIZE);
> +
> +			/* Copy payload */
> +			if (payload_size > 0)
> +				memcpy(skb_put(rx_skb, payload_size), payload_start, payload_size);
> +
> +			/* Send reconstructed frame to CPC core */
> +			cpc_hd_rcvd(ctx->cpc_hd, rx_skb);
> +		}
> +		/* else: allocation failed, skip this frame but continue processing */
No? If we're not able to allocate, we should just return. Change the `if` to check for a failed allocation and return. This has the added benefit of keeping the nominal path unindented.
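The rework the reviewer suggests is the usual early-return shape; a generic sketch with plain `malloc()` standing in for `alloc_skb()`:

```c
#include <stdlib.h>
#include <string.h>

/* Early return on allocation failure keeps the nominal copy path
 * unindented instead of nesting it under `if (rx_skb)`. */
static char *dup_frame(const char *src, size_t len)
{
	char *copy = malloc(len + 1);

	if (!copy)
		return NULL; /* abort the whole transfer, don't skip a frame */

	memcpy(copy, src, len);
	copy[len] = '\0';
	return copy;
}
```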
> +static u32 cpc_sdio_get_rx_num_bytes(struct sdio_func *func, int *err)
> +{
> +	unsigned int rx_num_block_addr = 0x0C;
> +
> +	return sdio_readl(func, rx_num_block_addr, err);
> +}
Have `0x0C` in a `GB_CPC_SDIO_RX_BLOCK_CNT_ADDR` define for better readability.
> +static void gb_cpc_sdio_tx(struct cpc_sdio *ctx)
> +{
> [...]
> +cleanup_frames:
> +	/* Clean up any remaining frames in the list */
> +	skb_queue_purge(&frame_list);
Misleading comment, since `frame_list` will always have frames left in it, as they are never removed during TX.
> +static void gb_cpc_sdio_rx_tx(struct cpc_sdio *ctx)
> +{
> +	gb_cpc_sdio_rx(ctx);
> +
> +	set_bit(CPC_SDIO_FLAG_IRQ_RUNNING, &ctx->flags);
> +	gb_cpc_sdio_tx(ctx);
> +	clear_bit(CPC_SDIO_FLAG_IRQ_RUNNING, &ctx->flags);
> +}
This is very surprising to me: why are we processing our TX in the RX IRQ? This seems entirely unnecessary. It feels like we could rework this and remove `CPC_SDIO_FLAG_IRQ_RUNNING`.
Thanks,