From: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com
Hello,
Please find v5 below:
v5 (22-Apr-2025) - Further fix for 32-bit ARM alignment in tcp.c (Simon Horman horms@kernel.org)
v4 (18-Apr-2025) - Fix 32-bit ARM assertion for alignment requirement (Simon Horman horms@kernel.org)
v3 (14-Apr-2025) - Fix patch apply issue in v2 (Jakub Kicinski kuba@kernel.org)
v2 (18-Mar-2025) - Add one missing patch from previous AccECN protocol preparation patch series to this patch series
The full patch series can be found at https://github.com/L4STeam/linux-net-next/commits/upstream_l4steam/
The Accurate ECN draft can be found at https://datatracker.ietf.org/doc/html/draft-ietf-tcpm-accurate-ecn-28
Best regards, Chia-Yu
Chia-Yu Chang (1):
  tcp: accecn: AccECN option failure handling

Ilpo Järvinen (14):
  tcp: reorganize SYN ECN code
  tcp: fast path functions later
  tcp: AccECN core
  tcp: accecn: AccECN negotiation
  tcp: accecn: add AccECN rx byte counters
  tcp: accecn: AccECN needs to know delivered bytes
  tcp: allow embedding leftover into option padding
  tcp: sack option handling improvements
  tcp: accecn: AccECN option
  tcp: accecn: AccECN option send control
  tcp: accecn: AccECN option ceb/cep heuristic
  tcp: accecn: AccECN ACE field multi-wrap heuristic
  tcp: accecn: try to fit AccECN option with SACK
  tcp: try to avoid safer when ACKs are thinned

 include/linux/tcp.h        |  27 +-
 include/net/netns/ipv4.h   |   2 +
 include/net/tcp.h          | 198 +++++++++++--
 include/uapi/linux/tcp.h   |   7 +
 net/ipv4/syncookies.c      |   3 +
 net/ipv4/sysctl_net_ipv4.c |  19 ++
 net/ipv4/tcp.c             |  26 +-
 net/ipv4/tcp_input.c       | 591 +++++++++++++++++++++++++++++++++++--
 net/ipv4/tcp_ipv4.c        |   5 +-
 net/ipv4/tcp_minisocks.c   |  92 +++++-
 net/ipv4/tcp_output.c      | 302 +++++++++++++++++--
 net/ipv6/syncookies.c      |   1 +
 net/ipv6/tcp_ipv6.c        |   1 +
 13 files changed, 1178 insertions(+), 96 deletions(-)
From: Ilpo Järvinen ij@kernel.org
Prepare for AccECN, which needs access to the IP ECN field value here; it is only available after INET_ECN_xmit().
No functional changes.
Signed-off-by: Ilpo Järvinen ij@kernel.org
Signed-off-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com
Reviewed-by: Eric Dumazet edumazet@google.com
---
 net/ipv4/tcp_output.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 13295a59d22e..9a1ab946ff62 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -350,10 +350,11 @@ static void tcp_ecn_send_syn(struct sock *sk, struct sk_buff *skb)
 	tp->ecn_flags = 0;

 	if (use_ecn) {
-		TCP_SKB_CB(skb)->tcp_flags |= TCPHDR_ECE | TCPHDR_CWR;
-		tcp_ecn_mode_set(tp, TCP_ECN_MODE_RFC3168);
 		if (tcp_ca_needs_ecn(sk) || bpf_needs_ecn)
 			INET_ECN_xmit(sk);
+
+		TCP_SKB_CB(skb)->tcp_flags |= TCPHDR_ECE | TCPHDR_CWR;
+		tcp_ecn_mode_set(tp, TCP_ECN_MODE_RFC3168);
 	}
 }
From: Ilpo Järvinen ij@kernel.org
The following patch will use tcp_ecn_mode_accecn(), TCP_ACCECN_CEP_INIT_OFFSET, and TCP_ACCECN_CEP_ACE_MASK in __tcp_fast_path_on() to build the new header prediction flags for AccECN, so move the fast path helpers later in the file, below those definitions.
No functional changes.
Signed-off-by: Ilpo Järvinen ij@kernel.org
Signed-off-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com
---
 include/net/tcp.h | 54 +++++++++++++++++++++++------------------------
 1 file changed, 27 insertions(+), 27 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 5078ad868fee..4dacd4a11669 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -812,33 +812,6 @@ static inline u32 __tcp_set_rto(const struct tcp_sock *tp)
 	return usecs_to_jiffies((tp->srtt_us >> 3) + tp->rttvar_us);
 }

-static inline void __tcp_fast_path_on(struct tcp_sock *tp, u32 snd_wnd)
-{
-	/* mptcp hooks are only on the slow path */
-	if (sk_is_mptcp((struct sock *)tp))
-		return;
-
-	tp->pred_flags = htonl((tp->tcp_header_len << 26) |
-			       ntohl(TCP_FLAG_ACK) |
-			       snd_wnd);
-}
-
-static inline void tcp_fast_path_on(struct tcp_sock *tp)
-{
-	__tcp_fast_path_on(tp, tp->snd_wnd >> tp->rx_opt.snd_wscale);
-}
-
-static inline void tcp_fast_path_check(struct sock *sk)
-{
-	struct tcp_sock *tp = tcp_sk(sk);
-
-	if (RB_EMPTY_ROOT(&tp->out_of_order_queue) &&
-	    tp->rcv_wnd &&
-	    atomic_read(&sk->sk_rmem_alloc) < sk->sk_rcvbuf &&
-	    !tp->urg_data)
-		tcp_fast_path_on(tp);
-}
-
 u32 tcp_delack_max(const struct sock *sk);

 /* Compute the actual rto_min value */
@@ -1798,6 +1771,33 @@ static inline bool tcp_paws_reject(const struct tcp_options_received *rx_opt,
 	return true;
 }

+static inline void __tcp_fast_path_on(struct tcp_sock *tp, u32 snd_wnd)
+{
+	/* mptcp hooks are only on the slow path */
+	if (sk_is_mptcp((struct sock *)tp))
+		return;
+
+	tp->pred_flags = htonl((tp->tcp_header_len << 26) |
+			       ntohl(TCP_FLAG_ACK) |
+			       snd_wnd);
+}
+
+static inline void tcp_fast_path_on(struct tcp_sock *tp)
+{
+	__tcp_fast_path_on(tp, tp->snd_wnd >> tp->rx_opt.snd_wscale);
+}
+
+static inline void tcp_fast_path_check(struct sock *sk)
+{
+	struct tcp_sock *tp = tcp_sk(sk);
+
+	if (RB_EMPTY_ROOT(&tp->out_of_order_queue) &&
+	    tp->rcv_wnd &&
+	    atomic_read(&sk->sk_rmem_alloc) < sk->sk_rcvbuf &&
+	    !tp->urg_data)
+		tcp_fast_path_on(tp);
+}
+
 bool tcp_oow_rate_limited(struct net *net, const struct sk_buff *skb,
 			  int mib_idx, u32 *last_oow_ack_time);
From: Ilpo Järvinen ij@kernel.org
This change implements Accurate ECN without negotiation and without the AccECN option (both will be added by later changes). Based on the AccECN specification: https://tools.ietf.org/id/draft-ietf-tcpm-accurate-ecn-28.txt
Accurate ECN allows feeding the number of CE (congestion experienced) marks accurately back to the sender, in contrast to RFC3168 ECN, which can only signal one marks-seen-yes/no indication per RTT. Congestion control algorithms can take advantage of the accurate ECN information to fine-tune their congestion response and avoid drastic rate reduction when only mild congestion is encountered.
With Accurate ECN, tp->received_ce (r.cep in the AccECN spec) keeps track of how many segments have arrived with a CE mark. Accurate ECN uses the ACE field (ECE, CWR, AE) to communicate the value back to the sender, which updates tp->delivered_ce (s.cep) based on the feedback. This signalling channel is lossy when the ACE field overflows.

A conservative strategy is selected here to deal with ACE field overflow; however, some strategies added later in the overall patchset use the AccECN option to mitigate falsely detected overflows.

The ACE field values on the wire are offset by TCP_ACCECN_CEP_INIT_OFFSET. Delivered_ce/received_ce count the real CE marks rather than forcing all downstream users to adapt to the wire offset.
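To make the wire arithmetic concrete, here is a minimal user-space sketch (illustrative only, not the kernel code; ace_encode/ace_delta/safe_delta are names local to this example) of the init-offset encoding and the conservative overflow estimate used by __tcp_accecn_process():

#include <assert.h>
#include <stdint.h>

#define ACE_MASK    0x7  /* TCP_ACCECN_CEP_ACE_MASK */
#define INIT_OFFSET 5    /* TCP_ACCECN_CEP_INIT_OFFSET */

/* Receiver side: 3-bit value carried in the (AE, CWR, ECE) flags */
static uint8_t ace_encode(uint32_t received_ce)
{
	return (received_ce + INIT_OFFSET) & ACE_MASK;
}

/* Sender side: CE delta implied by the echoed ACE field, mod 8 */
static uint32_t ace_delta(uint8_t ace, uint32_t delivered_ce)
{
	return (ace - INIT_OFFSET - delivered_ce) & ACE_MASK;
}

/* Largest CE count <= delivered_pkts congruent to delta mod 8 */
static uint32_t safe_delta(uint32_t delivered_pkts, uint32_t delta)
{
	return delivered_pkts - ((delivered_pkts - delta) & ACE_MASK);
}

int main(void)
{
	/* No CE marks seen yet: the wire value is the 0b101 init offset */
	assert(ace_encode(0) == 0x5);
	/* 2 CE marks arrived, none yet accounted for by the sender */
	assert(ace_delta(ace_encode(2), 0) == 2);
	/* 9 CE marks wrap the 3-bit field: only 9 mod 8 == 1 is visible */
	assert(ace_delta(ace_encode(9), 0) == 1);
	/* An ACK covering 20 packets with delta 3: conservatively assume
	 * 19 (= 3 + 2 * 8) CE-marked packets rather than just 3.
	 */
	assert(safe_delta(20, 3) == 19);
	return 0;
}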
Co-developed-by: Olivier Tilmans olivier.tilmans@nokia.com
Signed-off-by: Olivier Tilmans olivier.tilmans@nokia.com
Signed-off-by: Ilpo Järvinen ij@kernel.org
Co-developed-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com
Signed-off-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com
---
 include/linux/tcp.h   |   3 ++
 include/net/tcp.h     |  26 +++++++++
 net/ipv4/tcp.c        |   4 +-
 net/ipv4/tcp_input.c  | 121 +++++++++++++++++++++++++++++++++++++-----
 net/ipv4/tcp_output.c |  21 +++++++-
 5 files changed, 160 insertions(+), 15 deletions(-)
diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index 1669d95bb0f9..e36018203bd0 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -298,6 +298,9 @@ struct tcp_sock {
 	u32	snd_up;		/* Urgent pointer */
 	u32	delivered;	/* Total data packets delivered incl. rexmits */
 	u32	delivered_ce;	/* Like the above but only ECE marked packets */
+	u32	received_ce;	/* Like the above but for rcvd CE marked pkts */
+	u8	received_ce_pending:4, /* Not yet transmit cnt of received_ce */
+		unused2:4;
 	u32	app_limited;	/* limited until "delivered" reaches this val */
 	u32	rcv_wnd;	/* Current receiver window */
 	/*
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 4dacd4a11669..cc28255deef7 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -415,6 +415,11 @@ static inline void tcp_ecn_mode_set(struct tcp_sock *tp, u8 mode)
 	tp->ecn_flags |= mode;
 }

+static inline u8 tcp_accecn_ace(const struct tcphdr *th)
+{
+	return (th->ae << 2) | (th->cwr << 1) | th->ece;
+}
+
 enum tcp_tw_status {
 	TCP_TW_SUCCESS = 0,
 	TCP_TW_RST = 1,
@@ -964,6 +969,20 @@ static inline u32 tcp_rsk_tsval(const struct tcp_request_sock *treq)
 #define TCPHDR_ACE (TCPHDR_ECE | TCPHDR_CWR | TCPHDR_AE)
 #define TCPHDR_SYN_ECN	(TCPHDR_SYN | TCPHDR_ECE | TCPHDR_CWR)

+#define TCP_ACCECN_CEP_ACE_MASK 0x7
+#define TCP_ACCECN_ACE_MAX_DELTA 6
+
+/* To avoid/detect middlebox interference, not all counters start at 0.
+ * See draft-ietf-tcpm-accurate-ecn for the latest values.
+ */
+#define TCP_ACCECN_CEP_INIT_OFFSET 5
+
+static inline void tcp_accecn_init_counters(struct tcp_sock *tp)
+{
+	tp->received_ce = 0;
+	tp->received_ce_pending = 0;
+}
+
 /* State flags for sacked in struct tcp_skb_cb */
 enum tcp_skb_cb_sacked_flags {
 	TCPCB_SACKED_ACKED	= (1 << 0),	/* SKB ACK'd by a SACK block */
@@ -1773,11 +1792,18 @@ static inline bool tcp_paws_reject(const struct tcp_options_received *rx_opt,

 static inline void __tcp_fast_path_on(struct tcp_sock *tp, u32 snd_wnd)
 {
+	u32 ace;
+
 	/* mptcp hooks are only on the slow path */
 	if (sk_is_mptcp((struct sock *)tp))
 		return;

+	ace = tcp_ecn_mode_accecn(tp) ?
+		((tp->delivered_ce + TCP_ACCECN_CEP_INIT_OFFSET) &
+		 TCP_ACCECN_CEP_ACE_MASK) : 0;
+
 	tp->pred_flags = htonl((tp->tcp_header_len << 26) |
+			       (ace << 22) |
 			       ntohl(TCP_FLAG_ACK) |
 			       snd_wnd);
 }
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index e0e96f8fd47c..372c58170f4c 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -3364,6 +3364,7 @@ int tcp_disconnect(struct sock *sk, int flags)
 	tp->window_clamp = 0;
 	tp->delivered = 0;
 	tp->delivered_ce = 0;
+	tcp_accecn_init_counters(tp);
 	if (icsk->icsk_ca_initialized && icsk->icsk_ca_ops->release)
 		icsk->icsk_ca_ops->release(sk);
 	memset(icsk->icsk_ca_priv, 0, sizeof(icsk->icsk_ca_priv));
@@ -5088,6 +5089,7 @@ static void __init tcp_struct_check(void)
 	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, snd_up);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, delivered);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, delivered_ce);
+	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, received_ce);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, app_limited);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, rcv_wnd);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, rx_opt);
@@ -5095,7 +5097,7 @@ static void __init tcp_struct_check(void)
 	/* 32bit arches with 8byte alignment on u64 fields might need padding
 	 * before tcp_clock_cache.
 	 */
-	CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 92 + 4);
+	CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 97 + 7);

 	/* RX read-write hotpath cache lines */
 	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_rx, bytes_received);
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index a35018e2d0ba..8dbb625f5e8a 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -341,14 +341,17 @@ static bool tcp_in_quickack_mode(struct sock *sk)

 static void tcp_ecn_queue_cwr(struct tcp_sock *tp)
 {
+	/* Do not set CWR if in AccECN mode! */
 	if (tcp_ecn_mode_rfc3168(tp))
 		tp->ecn_flags |= TCP_ECN_QUEUE_CWR;
 }

 static void tcp_ecn_accept_cwr(struct sock *sk, const struct sk_buff *skb)
 {
-	if (tcp_hdr(skb)->cwr) {
-		tcp_sk(sk)->ecn_flags &= ~TCP_ECN_DEMAND_CWR;
+	struct tcp_sock *tp = tcp_sk(sk);
+
+	if (tcp_ecn_mode_rfc3168(tp) && tcp_hdr(skb)->cwr) {
+		tp->ecn_flags &= ~TCP_ECN_DEMAND_CWR;

 		/* If the sender is telling us it has entered CWR, then its
 		 * cwnd may be very low (even just 1 packet), so we should ACK
@@ -384,17 +387,16 @@ static void tcp_data_ecn_check(struct sock *sk, const struct sk_buff *skb)
 		if (tcp_ca_needs_ecn(sk))
 			tcp_ca_event(sk, CA_EVENT_ECN_IS_CE);

-		if (!(tp->ecn_flags & TCP_ECN_DEMAND_CWR)) {
+		if (!(tp->ecn_flags & TCP_ECN_DEMAND_CWR) &&
+		    tcp_ecn_mode_rfc3168(tp)) {
 			/* Better not delay acks, sender can have a very low cwnd */
 			tcp_enter_quickack_mode(sk, 2);
 			tp->ecn_flags |= TCP_ECN_DEMAND_CWR;
 		}
-		tp->ecn_flags |= TCP_ECN_SEEN;
 		break;
 	default:
 		if (tcp_ca_needs_ecn(sk))
 			tcp_ca_event(sk, CA_EVENT_ECN_NO_CE);
-		tp->ecn_flags |= TCP_ECN_SEEN;
 		break;
 	}
 }
@@ -428,10 +430,64 @@ static void tcp_count_delivered(struct tcp_sock *tp, u32 delivered,
 			       bool ece_ack)
 {
 	tp->delivered += delivered;
-	if (ece_ack)
+	if (tcp_ecn_mode_rfc3168(tp) && ece_ack)
 		tcp_count_delivered_ce(tp, delivered);
 }

+/* Returns the ECN CE delta */
+static u32 __tcp_accecn_process(struct sock *sk, const struct sk_buff *skb,
+				u32 delivered_pkts, int flag)
+{
+	const struct tcphdr *th = tcp_hdr(skb);
+	struct tcp_sock *tp = tcp_sk(sk);
+	u32 delta, safe_delta;
+	u32 corrected_ace;
+
+	/* Reordered ACK or uncertain due to lack of data to send and ts */
+	if (!(flag & (FLAG_FORWARD_PROGRESS | FLAG_TS_PROGRESS)))
+		return 0;
+
+	if (!(flag & FLAG_SLOWPATH)) {
+		/* AccECN counter might overflow on large ACKs */
+		if (delivered_pkts <= TCP_ACCECN_CEP_ACE_MASK)
+			return 0;
+	}
+
+	/* ACE field is not available during handshake */
+	if (flag & FLAG_SYN_ACKED)
+		return 0;
+
+	if (tp->received_ce_pending >= TCP_ACCECN_ACE_MAX_DELTA)
+		inet_csk(sk)->icsk_ack.pending |= ICSK_ACK_NOW;
+
+	corrected_ace = tcp_accecn_ace(th) - TCP_ACCECN_CEP_INIT_OFFSET;
+	delta = (corrected_ace - tp->delivered_ce) & TCP_ACCECN_CEP_ACE_MASK;
+	if (delivered_pkts <= TCP_ACCECN_CEP_ACE_MASK)
+		return delta;
+
+	safe_delta = delivered_pkts -
+		     ((delivered_pkts - delta) & TCP_ACCECN_CEP_ACE_MASK);
+
+	return safe_delta;
+}
+
+static u32 tcp_accecn_process(struct sock *sk, const struct sk_buff *skb,
+			      u32 delivered_pkts, int *flag)
+{
+	struct tcp_sock *tp = tcp_sk(sk);
+	u32 delta;
+
+	delta = __tcp_accecn_process(sk, skb, delivered_pkts, *flag);
+	if (delta > 0) {
+		tcp_count_delivered_ce(tp, delta);
+		*flag |= FLAG_ECE;
+		/* Recalculate header predictor */
+		if (tp->pred_flags)
+			tcp_fast_path_on(tp);
+	}
+	return delta;
+}
+
 /* Buffer size and advertised window tuning.
  *
  * 1. Tuning sk->sk_sndbuf, when connection enters established state.
@@ -3919,7 +3975,8 @@ static void tcp_xmit_recovery(struct sock *sk, int rexmit)
 }

 /* Returns the number of packets newly acked or sacked by the current ACK */
-static u32 tcp_newly_delivered(struct sock *sk, u32 prior_delivered, int flag)
+static u32 tcp_newly_delivered(struct sock *sk, u32 prior_delivered,
+			       u32 ecn_count, int flag)
 {
 	const struct net *net = sock_net(sk);
 	struct tcp_sock *tp = tcp_sk(sk);
@@ -3927,8 +3984,12 @@ static u32 tcp_newly_delivered(struct sock *sk, u32 prior_delivered, int flag)
 	delivered = tp->delivered - prior_delivered;
 	NET_ADD_STATS(net, LINUX_MIB_TCPDELIVERED, delivered);
-	if (flag & FLAG_ECE)
-		NET_ADD_STATS(net, LINUX_MIB_TCPDELIVEREDCE, delivered);
+
+	if (flag & FLAG_ECE) {
+		if (tcp_ecn_mode_rfc3168(tp))
+			ecn_count = delivered;
+		NET_ADD_STATS(net, LINUX_MIB_TCPDELIVEREDCE, ecn_count);
+	}

 	return delivered;
 }
@@ -3949,6 +4010,7 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
 	u32 delivered = tp->delivered;
 	u32 lost = tp->lost;
 	int rexmit = REXMIT_NONE; /* Flag to (re)transmit to recover losses */
+	u32 ecn_count = 0; /* Did we receive ECE/an AccECN ACE update? */
 	u32 prior_fack;

 	sack_state.first_sackt = 0;
@@ -4056,6 +4118,11 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
 	tcp_rack_update_reo_wnd(sk, &rs);

+	if (tcp_ecn_mode_accecn(tp))
+		ecn_count = tcp_accecn_process(sk, skb,
+					       tp->delivered - delivered,
+					       &flag);
+
 	tcp_in_ack_event(sk, flag);

 	if (tp->tlp_high_seq)
@@ -4080,7 +4147,8 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
 	if ((flag & FLAG_FORWARD_PROGRESS) || !(flag & FLAG_NOT_DUP))
 		sk_dst_confirm(sk);

-	delivered = tcp_newly_delivered(sk, delivered, flag);
+	delivered = tcp_newly_delivered(sk, delivered, ecn_count, flag);
+
 	lost = tp->lost - lost;	/* freshly marked lost */
 	rs.is_ack_delayed = !!(flag & FLAG_ACK_MAYBE_DELAYED);
 	tcp_rate_gen(sk, delivered, lost, is_sack_reneg, sack_state.rate);
@@ -4089,12 +4157,16 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
 	return 1;

 no_queue:
+	if (tcp_ecn_mode_accecn(tp))
+		ecn_count = tcp_accecn_process(sk, skb,
+					       tp->delivered - delivered,
+					       &flag);
 	tcp_in_ack_event(sk, flag);
 	/* If data was DSACKed, see if we can undo a cwnd reduction. */
 	if (flag & FLAG_DSACKING_ACK) {
 		tcp_fastretrans_alert(sk, prior_snd_una, num_dupack, &flag,
 				      &rexmit);
-		tcp_newly_delivered(sk, delivered, flag);
+		tcp_newly_delivered(sk, delivered, ecn_count, flag);
 	}
 	/* If this ack opens up a zero window, clear backoff.  It was
 	 * being used to time the probes, and is probably far higher than
@@ -4115,7 +4187,7 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
 				&sack_state);
 		tcp_fastretrans_alert(sk, prior_snd_una, num_dupack, &flag,
 				      &rexmit);
-		tcp_newly_delivered(sk, delivered, flag);
+		tcp_newly_delivered(sk, delivered, ecn_count, flag);
 		tcp_xmit_recovery(sk, rexmit);
 	}
@@ -5952,6 +6024,26 @@ static void tcp_urg(struct sock *sk, struct sk_buff *skb, const struct tcphdr *t
 	}
 }

+/* Updates Accurate ECN received counters from the received IP ECN field */
+static void tcp_ecn_received_counters(struct sock *sk,
+				      const struct sk_buff *skb)
+{
+	u8 ecnfield = TCP_SKB_CB(skb)->ip_dsfield & INET_ECN_MASK;
+	u8 is_ce = INET_ECN_is_ce(ecnfield);
+	struct tcp_sock *tp = tcp_sk(sk);
+
+	if (!INET_ECN_is_not_ect(ecnfield)) {
+		u32 pcount = is_ce * max_t(u16, 1, skb_shinfo(skb)->gso_segs);
+
+		tp->ecn_flags |= TCP_ECN_SEEN;
+
+		/* ACE counter tracks *all* segments including pure ACKs */
+		tp->received_ce += pcount;
+		tp->received_ce_pending = min(tp->received_ce_pending + pcount,
+					      0xfU);
+	}
+}
+
 /* Accept RST for rcv_nxt - 1 after a FIN.
  * When tcp connections are abruptly terminated from Mac OSX (via ^C), a
  * FIN is sent followed by a RST packet. The RST is sent with the same
@@ -6214,6 +6306,8 @@ void tcp_rcv_established(struct sock *sk, struct sk_buff *skb)
 			flag |= __tcp_replace_ts_recent(tp, delta);

+			tcp_ecn_received_counters(sk, skb);
+
 			/* We know that such packets are checksummed
 			 * on entry.
 			 */
@@ -6258,6 +6352,7 @@ void tcp_rcv_established(struct sock *sk, struct sk_buff *skb)
 			/* Bulk data transfer: receiver */
 			tcp_cleanup_skb(skb);
 			__skb_pull(skb, tcp_header_len);
+			tcp_ecn_received_counters(sk, skb);
 			eaten = tcp_queue_rcv(sk, skb, &fragstolen);

 			tcp_event_data_recv(sk, skb);
@@ -6298,6 +6393,8 @@ void tcp_rcv_established(struct sock *sk, struct sk_buff *skb)
 		return;

 step5:
+	tcp_ecn_received_counters(sk, skb);
+
 	reason = tcp_ack(sk, skb, FLAG_SLOWPATH | FLAG_UPDATE_TS_RECENT);
 	if ((int)reason < 0) {
 		reason = -reason;
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 9a1ab946ff62..9c978d12c7cf 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -374,6 +374,17 @@ tcp_ecn_make_synack(const struct request_sock *req, struct tcphdr *th)
 		th->ece = 1;
 }

+static void tcp_accecn_set_ace(struct tcphdr *th, struct tcp_sock *tp)
+{
+	u32 wire_ace;
+
+	wire_ace = tp->received_ce + TCP_ACCECN_CEP_INIT_OFFSET;
+	th->ece = !!(wire_ace & 0x1);
+	th->cwr = !!(wire_ace & 0x2);
+	th->ae = !!(wire_ace & 0x4);
+	tp->received_ce_pending = 0;
+}
+
 /* Set up ECN state for a packet on a ESTABLISHED socket that is about to
  * be sent.
  */
@@ -382,11 +393,17 @@ static void tcp_ecn_send(struct sock *sk, struct sk_buff *skb,
 {
 	struct tcp_sock *tp = tcp_sk(sk);

-	if (tcp_ecn_mode_rfc3168(tp)) {
+	if (!tcp_ecn_mode_any(tp))
+		return;
+
+	INET_ECN_xmit(sk);
+	if (tcp_ecn_mode_accecn(tp)) {
+		tcp_accecn_set_ace(th, tp);
+		skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ACCECN;
+	} else {
 		/* Not-retransmitted data segment: set ECT and inject CWR. */
 		if (skb->len != tcp_header_len &&
 		    !before(TCP_SKB_CB(skb)->seq, tp->snd_nxt)) {
-			INET_ECN_xmit(sk);
 			if (tp->ecn_flags & TCP_ECN_QUEUE_CWR) {
 				tp->ecn_flags &= ~TCP_ECN_QUEUE_CWR;
 				th->cwr = 1;
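As an aside on the __tcp_fast_path_on() change above: in the host-order value passed to htonl(), the ACE bits land exactly on the AE/CWR/ECE header flag positions. A small user-space sketch (illustrative only; the flag constants below mirror the host-order bit positions, not the kernel's big-endian TCP_FLAG_* definitions):

#include <assert.h>
#include <stdint.h>

/* Host-order bit positions in the 3rd 32-bit word of the TCP header
 * (data offset | flags | window), as cached in tp->pred_flags.
 */
#define FLAG_AE  0x01000000  /* bit 24 */
#define FLAG_CWR 0x00800000  /* bit 23 */
#define FLAG_ECE 0x00400000  /* bit 22 */
#define FLAG_ACK 0x00100000  /* bit 20 */

int main(void)
{
	uint32_t tcp_header_len = 32;       /* e.g. with timestamps */
	uint8_t ace = (0 + 5) & 0x7;        /* delivered_ce = 0 + init offset */
	uint32_t pred = (tcp_header_len << 26) | (ace << 22) | FLAG_ACK;

	/* ace = 0b101 sets AE and ECE but not CWR */
	assert(pred & FLAG_AE);
	assert(!(pred & FLAG_CWR));
	assert(pred & FLAG_ECE);
	/* data offset nibble: 32 bytes -> 8 words in the top 4 bits */
	assert(pred >> 28 == 32 / 4);
	return 0;
}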
On 4/22/25 5:35 PM, chia-yu.chang@nokia-bell-labs.com wrote:
@@ -298,6 +298,9 @@ struct tcp_sock {
 	u32	snd_up;		/* Urgent pointer */
 	u32	delivered;	/* Total data packets delivered incl. rexmits */
 	u32	delivered_ce;	/* Like the above but only ECE marked packets */
+	u32	received_ce;	/* Like the above but for rcvd CE marked pkts */
+	u8	received_ce_pending:4, /* Not yet transmit cnt of received_ce */
+		unused2:4;
AFAICS this uses a 4-byte hole present prior to this patch after "rcv_wnd", leaving a 3-byte hole after 'unused2'. It might be worth mentioning the presence of the hole.

@Eric: would it make sense to use this hole for 'nonagle'/'rate_app_limited' and shrink the 'tcp_sock_write_txrx' group a bit?
[...]
@@ -5095,7 +5097,7 @@ static void __init tcp_struct_check(void)
 	/* 32bit arches with 8byte alignment on u64 fields might need padding
 	 * before tcp_clock_cache.
 	 */
-	CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 92 + 4);
+	CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 97 + 7);
Really? I *think* the change here should not move the cacheline end around, due to holes. Could you please include the relevant pahole (trimmed) output prior to this patch and after in the commit message?
[...]
@@ -384,17 +387,16 @@ static void tcp_data_ecn_check(struct sock *sk, const struct sk_buff *skb)
 		if (tcp_ca_needs_ecn(sk))
 			tcp_ca_event(sk, CA_EVENT_ECN_IS_CE);

-		if (!(tp->ecn_flags & TCP_ECN_DEMAND_CWR)) {
+		if (!(tp->ecn_flags & TCP_ECN_DEMAND_CWR) &&
+		    tcp_ecn_mode_rfc3168(tp)) {
 			/* Better not delay acks, sender can have a very low cwnd */
 			tcp_enter_quickack_mode(sk, 2);
 			tp->ecn_flags |= TCP_ECN_DEMAND_CWR;
 		}
-		tp->ecn_flags |= TCP_ECN_SEEN;
At this point it is not entirely clear to me why the removal of the above line is needed/correct.
[...]
@@ -4056,6 +4118,11 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
 	tcp_rack_update_reo_wnd(sk, &rs);

+	if (tcp_ecn_mode_accecn(tp))
+		ecn_count = tcp_accecn_process(sk, skb,
+					       tp->delivered - delivered,
+					       &flag);
AFAICS the above could set FLAG_ECE in flags, meaning the previous tcp_clean_rtx_queue() will run with such flag cleared while the later functions checking such flag will not. I am wondering if this inconsistency could cause problems?
/P
-----Original Message----- From: Paolo Abeni pabeni@redhat.com Sent: Tuesday, April 29, 2025 12:14 PM To: Chia-Yu Chang (Nokia) chia-yu.chang@nokia-bell-labs.com; horms@kernel.org; dsahern@kernel.org; kuniyu@amazon.com; bpf@vger.kernel.org; netdev@vger.kernel.org; dave.taht@gmail.com; jhs@mojatatu.com; kuba@kernel.org; stephen@networkplumber.org; xiyou.wangcong@gmail.com; jiri@resnulli.us; davem@davemloft.net; edumazet@google.com; andrew+netdev@lunn.ch; donald.hunter@gmail.com; ast@fiberby.net; liuhangbin@gmail.com; shuah@kernel.org; linux-kselftest@vger.kernel.org; ij@kernel.org; ncardwell@google.com; Koen De Schepper (Nokia) koen.de_schepper@nokia-bell-labs.com; g.white g.white@cablelabs.com; ingemar.s.johansson@ericsson.com; mirja.kuehlewind@ericsson.com; cheshire@apple.com; rs.ietf@gmx.at; Jason_Livingood@comcast.com; vidhi_goel vidhi_goel@apple.com Cc: Olivier Tilmans (Nokia) olivier.tilmans@nokia.com Subject: Re: [PATCH v5 net-next 03/15] tcp: AccECN core
On 4/22/25 5:35 PM, chia-yu.chang@nokia-bell-labs.com wrote:
@@ -298,6 +298,9 @@ struct tcp_sock {
 	u32	snd_up;		/* Urgent pointer */
 	u32	delivered;	/* Total data packets delivered incl. rexmits */
 	u32	delivered_ce;	/* Like the above but only ECE marked packets */
+	u32	received_ce;	/* Like the above but for rcvd CE marked pkts */
+	u8	received_ce_pending:4, /* Not yet transmit cnt of received_ce */
+		unused2:4;
AFAICS this uses a 4-byte hole present prior to this patch after "rcv_wnd", leaving a 3-byte hole after 'unused2'. It might be worth mentioning the presence of the hole.

@Eric: would it make sense to use this hole for 'nonagle'/'rate_app_limited' and shrink the 'tcp_sock_write_txrx' group a bit?
Hi Paolo,
Thanks for the feedback and sorry for my late response. I can either mention it here or move the fields. However, as the following patches will keep changing the holes, maybe mentioning the hole change per patch would make it more understandable. If this is fine for you, I will make the revision in the next version.
[...]
@@ -5095,7 +5097,7 @@ static void __init tcp_struct_check(void)
 	/* 32bit arches with 8byte alignment on u64 fields might need padding
 	 * before tcp_clock_cache.
 	 */
-	CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 92 + 4);
+	CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 97 + 7);
Really? I *think* the change here should not move the cacheline end around, due to holes. Could you please include the relevant pahole (trimmed) output prior to this patch and after in the commit message?
Here is the pahole output before and after this patch. Indeed, the new fields create a 3-byte hole after 'unused2', so they add 5 bytes plus that 3-byte hole to the original 92 + 4. Finally, it will be 92 + 4 + (5 + 3) = 97 + 7.

*BEFORE this patch*

        __u8 __cacheline_group_begin__tcp_sock_write_txrx[0]; /* 2585 0 */

        /* XXX 3 bytes hole, try to pack */

        __be32 pred_flags;                  /* 2588 4 */
        u64 tcp_clock_cache;                /* 2592 8 */
        u64 tcp_mstamp;                     /* 2600 8 */
        u32 rcv_nxt;                        /* 2608 4 */
        u32 snd_nxt;                        /* 2612 4 */
        u32 snd_una;                        /* 2616 4 */
        u32 window_clamp;                   /* 2620 4 */
        /* --- cacheline 41 boundary (2624 bytes) --- */
        u32 srtt_us;                        /* 2624 4 */
        u32 packets_out;                    /* 2628 4 */
        u32 snd_up;                         /* 2632 4 */
        u32 delivered;                      /* 2636 4 */
        u32 delivered_ce;                   /* 2640 4 */
        u32 app_limited;                    /* 2644 4 */
        u32 rcv_wnd;                        /* 2648 4 */
        struct tcp_options_received rx_opt; /* 2652 24 */
        u8 nonagle:4;                       /* 2676: 0 1 */
        u8 rate_app_limited:1;              /* 2676: 4 1 */

        /* XXX 3 bits hole, try to pack */

        __u8 __cacheline_group_end__tcp_sock_write_txrx[0]; /* 2677 0 */

        /* XXX 3 bytes hole, try to pack */

*AFTER this patch*

        __u8 __cacheline_group_begin__tcp_sock_write_txrx[0]; /* 2585 0 */

        /* XXX 3 bytes hole, try to pack */

        __be32 pred_flags;                  /* 2588 4 */
        u64 tcp_clock_cache;                /* 2592 8 */
        u64 tcp_mstamp;                     /* 2600 8 */
        u32 rcv_nxt;                        /* 2608 4 */
        u32 snd_nxt;                        /* 2612 4 */
        u32 snd_una;                        /* 2616 4 */
        u32 window_clamp;                   /* 2620 4 */
        /* --- cacheline 41 boundary (2624 bytes) --- */
        u32 srtt_us;                        /* 2624 4 */
        u32 packets_out;                    /* 2628 4 */
        u32 snd_up;                         /* 2632 4 */
        u32 delivered;                      /* 2636 4 */
        u32 delivered_ce;                   /* 2640 4 */
        u32 received_ce;                    /* 2644 4 */
        u8 received_ce_pending:4;           /* 2648: 0 1 */
        u8 unused2:4;                       /* 2648: 4 1 */

        /* XXX 3 bytes hole, try to pack */

        u32 app_limited;                    /* 2652 4 */
        u32 rcv_wnd;                        /* 2656 4 */
        struct tcp_options_received rx_opt; /* 2660 24 */
        u8 nonagle:4;                       /* 2684: 0 1 */
        u8 rate_app_limited:1;              /* 2684: 4 1 */

        /* XXX 3 bits hole, try to pack */

        __u8 __cacheline_group_end__tcp_sock_write_txrx[0]; /* 2685 0 */
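As a stand-alone illustration of the hole arithmetic above (assuming a typical ABI where u32 requires 4-byte alignment; the struct and field offsets below are a local example, not tcp_sock itself), the following compiles and passes:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* The two 4-bit fields share one byte; the next u32 must be 4-byte
 * aligned, so the compiler pads 3 bytes before 'app_limited'.
 */
struct txrx_tail {
	uint32_t delivered_ce;
	uint32_t received_ce;
	uint8_t  received_ce_pending:4,
		 unused2:4;
	uint32_t app_limited;   /* 3 bytes of padding precede this */
};

int main(void)
{
	assert(offsetof(struct txrx_tail, received_ce) == 4);
	assert(offsetof(struct txrx_tail, app_limited) == 12);
	assert(sizeof(struct txrx_tail) == 16);
	return 0;
}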
[...]
@@ -384,17 +387,16 @@ static void tcp_data_ecn_check(struct sock *sk, const struct sk_buff *skb)
 		if (tcp_ca_needs_ecn(sk))
 			tcp_ca_event(sk, CA_EVENT_ECN_IS_CE);

-		if (!(tp->ecn_flags & TCP_ECN_DEMAND_CWR)) {
+		if (!(tp->ecn_flags & TCP_ECN_DEMAND_CWR) &&
+		    tcp_ecn_mode_rfc3168(tp)) {
 			/* Better not delay acks, sender can have a very low cwnd */
 			tcp_enter_quickack_mode(sk, 2);
 			tp->ecn_flags |= TCP_ECN_DEMAND_CWR;
 		}
-		tp->ecn_flags |= TCP_ECN_SEEN;
At this point it is not entirely clear to me why the removal of the above line is needed/correct.

What this does is move the place where this flag is set from here into tcp_ecn_received_counters().

Also, this function only runs when receiving data, being called from tcp_event_data_recv().

In contrast, tcp_ecn_received_counters() takes effect in more places (e.g., whether len <= tcp_header_len or not) to ensure the ACE counter tracks all segments, including pure ACKs.
[...]
@@ -4056,6 +4118,11 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
 	tcp_rack_update_reo_wnd(sk, &rs);

+	if (tcp_ecn_mode_accecn(tp))
+		ecn_count = tcp_accecn_process(sk, skb,
+					       tp->delivered - delivered,
+					       &flag);
AFAICS the above could set FLAG_ECE in flags, meaning the previous tcp_clean_rtx_queue() will run with such flag cleared while the later functions checking such flag will not. I am wondering if this inconsistency could cause problems?
The flag set by tcp_accecn_process() will be used by the following functions: tcp_in_ack_event() and tcp_fastretrans_alert().

And this shall only impact AccECN mode.
Best regards, Chia-Yu
/P
-----Original Message----- From: Chia-Yu Chang (Nokia) Sent: Monday, May 5, 2025 5:25 PM To: Paolo Abeni pabeni@redhat.com; horms@kernel.org; dsahern@kernel.org; kuniyu@amazon.com; bpf@vger.kernel.org; netdev@vger.kernel.org; dave.taht@gmail.com; jhs@mojatatu.com; kuba@kernel.org; stephen@networkplumber.org; xiyou.wangcong@gmail.com; jiri@resnulli.us; davem@davemloft.net; edumazet@google.com; andrew+netdev@lunn.ch; donald.hunter@gmail.com; ast@fiberby.net; liuhangbin@gmail.com; shuah@kernel.org; linux-kselftest@vger.kernel.org; ij@kernel.org; ncardwell@google.com; Koen De Schepper (Nokia) koen.de_schepper@nokia-bell-labs.com; g.white g.white@cablelabs.com; ingemar.s.johansson@ericsson.com; mirja.kuehlewind@ericsson.com; cheshire@apple.com; rs.ietf@gmx.at; Jason_Livingood@comcast.com; vidhi_goel vidhi_goel@apple.com Cc: Olivier Tilmans (Nokia) olivier.tilmans@nokia.com Subject: RE: [PATCH v5 net-next 03/15] tcp: AccECN core
-----Original Message----- From: Paolo Abeni pabeni@redhat.com Sent: Tuesday, April 29, 2025 12:14 PM To: Chia-Yu Chang (Nokia) chia-yu.chang@nokia-bell-labs.com; horms@kernel.org; dsahern@kernel.org; kuniyu@amazon.com; bpf@vger.kernel.org; netdev@vger.kernel.org; dave.taht@gmail.com; jhs@mojatatu.com; kuba@kernel.org; stephen@networkplumber.org; xiyou.wangcong@gmail.com; jiri@resnulli.us; davem@davemloft.net; edumazet@google.com; andrew+netdev@lunn.ch; donald.hunter@gmail.com; ast@fiberby.net; liuhangbin@gmail.com; shuah@kernel.org; linux-kselftest@vger.kernel.org; ij@kernel.org; ncardwell@google.com; Koen De Schepper (Nokia) koen.de_schepper@nokia-bell-labs.com; g.white g.white@cablelabs.com; ingemar.s.johansson@ericsson.com; mirja.kuehlewind@ericsson.com; cheshire@apple.com; rs.ietf@gmx.at; Jason_Livingood@comcast.com; vidhi_goel vidhi_goel@apple.com Cc: Olivier Tilmans (Nokia) olivier.tilmans@nokia.com Subject: Re: [PATCH v5 net-next 03/15] tcp: AccECN core
On 4/22/25 5:35 PM, chia-yu.chang@nokia-bell-labs.com wrote:
@@ -298,6 +298,9 @@ struct tcp_sock {
 	u32	snd_up;		/* Urgent pointer */
 	u32	delivered;	/* Total data packets delivered incl. rexmits */
 	u32	delivered_ce;	/* Like the above but only ECE marked packets */
+	u32	received_ce;	/* Like the above but for rcvd CE marked pkts */
+	u8	received_ce_pending:4, /* Not yet transmit cnt of received_ce */
+		unused2:4;
AFAICS this uses a 4-byte hole present prior to this patch after "rcv_wnd", leaving a 3-byte hole after 'unused2'. It might be worth mentioning the presence of the hole.

@Eric: would it make sense to use this hole for 'nonagle'/'rate_app_limited' and shrink the 'tcp_sock_write_txrx' group a bit?
Hi,
By moving nonagle/rate_app_limited to the beginning of this group, I managed to reuse the 3-byte hole at the beginning of __cacheline_group_begin__tcp_sock_write_txrx. Thus, I will include this in the next version; you can find the pahole results below:
/* BEFORE this patch */

        __u8 __cacheline_group_end__tcp_sock_write_tx[0]; /* 2585 0 */
        __u8 __cacheline_group_begin__tcp_sock_write_txrx[0]; /* 2585 0 */

        /* XXX 3 bytes hole, try to pack */

        __be32 pred_flags;                  /* 2588 4 */
        u64 tcp_clock_cache;                /* 2592 8 */
        u64 tcp_mstamp;                     /* 2600 8 */
        u32 rcv_nxt;                        /* 2608 4 */
        u32 snd_nxt;                        /* 2612 4 */
        u32 snd_una;                        /* 2616 4 */
        u32 window_clamp;                   /* 2620 4 */
        /* --- cacheline 41 boundary (2624 bytes) --- */
        u32 srtt_us;                        /* 2624 4 */
        u32 packets_out;                    /* 2628 4 */
        u32 snd_up;                         /* 2632 4 */
        u32 delivered;                      /* 2636 4 */
        u32 delivered_ce;                   /* 2640 4 */
        u32 app_limited;                    /* 2644 4 */
        u32 rcv_wnd;                        /* 2648 4 */
        struct tcp_options_received rx_opt; /* 2652 24 */
        u8 nonagle:4;                       /* 2676: 0 1 */
        u8 rate_app_limited:1;              /* 2676: 4 1 */

        /* XXX 3 bits hole, try to pack */

        __u8 __cacheline_group_end__tcp_sock_write_txrx[0]; /* 2677 0 */

        /* XXX 3 bytes hole, try to pack */

        __u8 __cacheline_group_begin__tcp_sock_write_rx[0] __attribute__((__aligned__(8))); /* 2680 0 */
/* AFTER this patch */

        __u8 __cacheline_group_end__tcp_sock_write_tx[0]; /* 2585 0 */
        __u8 __cacheline_group_begin__tcp_sock_write_txrx[0]; /* 2585 0 */
        u8 nonagle:4;                       /* 2585: 0 1 */
        u8 rate_app_limited:1;              /* 2585: 4 1 */

        /* XXX 3 bits hole, try to pack */

        /* Force alignment to the next boundary: */
        u8 :0;

        u8 received_ce_pending:4;           /* 2586: 0 1 */
        u8 unused2:4;                       /* 2586: 4 1 */

        /* XXX 1 byte hole, try to pack */

        __be32 pred_flags;                  /* 2588 4 */
        u64 tcp_clock_cache;                /* 2592 8 */
        u64 tcp_mstamp;                     /* 2600 8 */
        u32 rcv_nxt;                        /* 2608 4 */
        u32 snd_nxt;                        /* 2612 4 */
        u32 snd_una;                        /* 2616 4 */
        u32 window_clamp;                   /* 2620 4 */
        /* --- cacheline 41 boundary (2624 bytes) --- */
        u32 srtt_us;                        /* 2624 4 */
        u32 packets_out;                    /* 2628 4 */
        u32 snd_up;                         /* 2632 4 */
        u32 delivered;                      /* 2636 4 */
        u32 delivered_ce;                   /* 2640 4 */
        u32 received_ce;                    /* 2644 4 */
        u32 app_limited;                    /* 2648 4 */
        u32 rcv_wnd;                        /* 2652 4 */
        struct tcp_options_received rx_opt; /* 2656 24 */
        __u8 __cacheline_group_end__tcp_sock_write_txrx[0]; /* 2680 0 */
        __u8 __cacheline_group_begin__tcp_sock_write_rx[0] __attribute__((__aligned__(8))); /* 2680 0 */
Hi Paolo,
Thanks for the feedback and sorry for my late response. I can either mention it here or move the fields. However, as the following patches will keep changing the holes, maybe mentioning the hole change per patch would make it more understandable. If this is fine for you, I will make the revision in the next version.
[...]
@@ -5095,7 +5097,7 @@ static void __init tcp_struct_check(void)
 	/* 32bit arches with 8byte alignment on u64 fields might need padding
 	 * before tcp_clock_cache.
 	 */
-	CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 92 + 4);
+	CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 97 + 7);
Really? I *think* the change here should not move the cacheline end around, due to holes. Could you please include the relevant pahole (trimmed) output prior to this patch and after in the commit message?
Here is the pahole output before and after this patch. Indeed, the new fields create a 3-byte hole after 'unused2', so they add 5 bytes plus that 3-byte hole to the original 92 + 4. Finally, it will be 92 + 4 + (5 + 3) = 97 + 7.

*BEFORE this patch*

        __u8 __cacheline_group_begin__tcp_sock_write_txrx[0]; /* 2585 0 */

        /* XXX 3 bytes hole, try to pack */

        __be32 pred_flags;                  /* 2588 4 */
        u64 tcp_clock_cache;                /* 2592 8 */
        u64 tcp_mstamp;                     /* 2600 8 */
        u32 rcv_nxt;                        /* 2608 4 */
        u32 snd_nxt;                        /* 2612 4 */
        u32 snd_una;                        /* 2616 4 */
        u32 window_clamp;                   /* 2620 4 */
        /* --- cacheline 41 boundary (2624 bytes) --- */
        u32 srtt_us;                        /* 2624 4 */
        u32 packets_out;                    /* 2628 4 */
        u32 snd_up;                         /* 2632 4 */
        u32 delivered;                      /* 2636 4 */
        u32 delivered_ce;                   /* 2640 4 */
        u32 app_limited;                    /* 2644 4 */
        u32 rcv_wnd;                        /* 2648 4 */
        struct tcp_options_received rx_opt; /* 2652 24 */
        u8 nonagle:4;                       /* 2676: 0 1 */
        u8 rate_app_limited:1;              /* 2676: 4 1 */

        /* XXX 3 bits hole, try to pack */

        __u8 __cacheline_group_end__tcp_sock_write_txrx[0]; /* 2677 0 */

        /* XXX 3 bytes hole, try to pack */

*AFTER this patch*

        __u8 __cacheline_group_begin__tcp_sock_write_txrx[0]; /* 2585 0 */

        /* XXX 3 bytes hole, try to pack */

        __be32 pred_flags;                  /* 2588 4 */
        u64 tcp_clock_cache;                /* 2592 8 */
        u64 tcp_mstamp;                     /* 2600 8 */
        u32 rcv_nxt;                        /* 2608 4 */
        u32 snd_nxt;                        /* 2612 4 */
        u32 snd_una;                        /* 2616 4 */
        u32 window_clamp;                   /* 2620 4 */
        /* --- cacheline 41 boundary (2624 bytes) --- */
        u32 srtt_us;                        /* 2624 4 */
        u32 packets_out;                    /* 2628 4 */
        u32 snd_up;                         /* 2632 4 */
        u32 delivered;                      /* 2636 4 */
        u32 delivered_ce;                   /* 2640 4 */
        u32 received_ce;                    /* 2644 4 */
        u8 received_ce_pending:4;           /* 2648: 0 1 */
        u8 unused2:4;                       /* 2648: 4 1 */

        /* XXX 3 bytes hole, try to pack */

        u32 app_limited;                    /* 2652 4 */
        u32 rcv_wnd;                        /* 2656 4 */
        struct tcp_options_received rx_opt; /* 2660 24 */
        u8 nonagle:4;                       /* 2684: 0 1 */
        u8 rate_app_limited:1;              /* 2684: 4 1 */

        /* XXX 3 bits hole, try to pack */

        __u8 __cacheline_group_end__tcp_sock_write_txrx[0]; /* 2685 0 */
[...]
@@ -384,17 +387,16 @@ static void tcp_data_ecn_check(struct sock *sk, const struct sk_buff *skb)
 		if (tcp_ca_needs_ecn(sk))
 			tcp_ca_event(sk, CA_EVENT_ECN_IS_CE);

-		if (!(tp->ecn_flags & TCP_ECN_DEMAND_CWR)) {
+		if (!(tp->ecn_flags & TCP_ECN_DEMAND_CWR) &&
+		    tcp_ecn_mode_rfc3168(tp)) {
 			/* Better not delay acks, sender can have a very low cwnd */
 			tcp_enter_quickack_mode(sk, 2);
 			tp->ecn_flags |= TCP_ECN_DEMAND_CWR;
 		}
-		tp->ecn_flags |= TCP_ECN_SEEN;
At this point it is not entirely clear to me why the removal of the above line is needed/correct.

What this does is move the place where this flag is set from here into tcp_ecn_received_counters().

Also, this function only runs when receiving data, being called from tcp_event_data_recv().

In contrast, tcp_ecn_received_counters() takes effect in more places (e.g., whether len <= tcp_header_len or not) to ensure the ACE counter tracks all segments, including pure ACKs.
[...]
@@ -4056,6 +4118,11 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
 	tcp_rack_update_reo_wnd(sk, &rs);

+	if (tcp_ecn_mode_accecn(tp))
+		ecn_count = tcp_accecn_process(sk, skb,
+					       tp->delivered - delivered,
+					       &flag);
AFAICS the above could set FLAG_ECE in flags, meaning the previous tcp_clean_rtx_queue() will run with such flag cleared while the later functions checking such flag will not. I am wondering if this inconsistency could cause problems?
The flag set by tcp_accecn_process() will be used by the following functions: tcp_in_ack_event() and tcp_fastretrans_alert().

And this shall only impact AccECN mode.
Best regards, Chia-Yu
/P
From: Ilpo Järvinen ij@kernel.org
Accurate ECN negotiation parts based on the specification: https://tools.ietf.org/id/draft-ietf-tcpm-accurate-ecn-28.txt
Accurate ECN is negotiated using the ECE, CWR and AE flags in the TCP header. TCP falls back to RFC3168 ECN if one of the ends supports only RFC3168-style ECN.

The AccECN negotiation includes reflecting the IP ECN field value seen in the SYN and SYNACK back using the same bits as the negotiation, to allow responding to SYN CE marks and to detect ECN field mangling. CE marks should not occur currently because SYN=1 segments are sent with Non-ECT in the IP ECN field (but a proposal exists to remove this restriction).

Reflecting the SYN IP ECN field in the SYNACK is relatively simple. Reflecting the SYNACK IP ECN field in the final/third ACK of the handshake is more challenging. Linux TCP code is not well prepared for using the final/third ACK as a signalling channel, which makes things somewhat complicated here.
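For illustration, here is a small user-space sketch (not the kernel code; the enum and function names are local to the example) of the reflector mapping used in this patch: the IP ECN codepoint seen on the SYN maps to ACE values 0b010/0b011/0b100/0b110, which decode back losslessly and always indicate successful AccECN negotiation (ACE >= 2, cf. cookie_accecn_ok()):

#include <assert.h>
#include <stdint.h>

enum { NOT_ECT = 0, ECT_1 = 1, ECT_0 = 2, CE = 3 }; /* INET_ECN_* values */

/* Mirror of tcp_accecn_reflector_flags(): ECT value -> ACE bits */
static uint8_t reflector_ace(uint8_t ect)
{
	uint8_t ace = ect + 2;  /* 0 -> 0b010, 1 -> 0b011, 2 -> 0b100 */

	if (ect == CE)
		ace++;          /* 3 -> 0b110 */
	return ace;
}

/* Mirror of tcp_accecn_extract_syn_ect(): ACE bits -> ECT value */
static int extract_syn_ect(uint8_t ace)
{
	if (ace & 0x1)
		return ECT_1;
	if (!(ace & 0x2))
		return ECT_0;
	if (ace & 0x4)
		return CE;
	return NOT_ECT;
}

int main(void)
{
	/* The round trip recovers every codepoint, and every reflected
	 * value also signals "AccECN negotiated" (ACE >= 2).
	 */
	for (uint8_t ect = NOT_ECT; ect <= CE; ect++) {
		assert(reflector_ace(ect) >= 2);
		assert(extract_syn_ect(reflector_ace(ect)) == ect);
	}
	return 0;
}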
Co-developed-by: Olivier Tilmans olivier.tilmans@nokia.com
Signed-off-by: Olivier Tilmans olivier.tilmans@nokia.com
Signed-off-by: Ilpo Järvinen ij@kernel.org
Co-developed-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com
Signed-off-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com
---
 include/linux/tcp.h        |   9 ++-
 include/net/tcp.h          |  80 ++++++++++++++++++-
 net/ipv4/syncookies.c      |   3 +
 net/ipv4/sysctl_net_ipv4.c |   3 +-
 net/ipv4/tcp.c             |   2 +
 net/ipv4/tcp_input.c       | 155 +++++++++++++++++++++++++++++++++----
 net/ipv4/tcp_ipv4.c        |   3 +-
 net/ipv4/tcp_minisocks.c   |  51 ++++++++++--
 net/ipv4/tcp_output.c      |  78 +++++++++++++++----
 net/ipv6/syncookies.c      |   1 +
 net/ipv6/tcp_ipv6.c        |   1 +
 11 files changed, 343 insertions(+), 43 deletions(-)
diff --git a/include/linux/tcp.h b/include/linux/tcp.h index e36018203bd0..af38fff24aa4 100644 --- a/include/linux/tcp.h +++ b/include/linux/tcp.h @@ -156,6 +156,10 @@ struct tcp_request_sock { #if IS_ENABLED(CONFIG_MPTCP) bool drop_req; #endif + u8 accecn_ok : 1, + syn_ect_snt: 2, + syn_ect_rcv: 2; + u8 accecn_fail_mode:4; u32 txhash; u32 rcv_isn; u32 snt_isn; @@ -376,7 +380,10 @@ struct tcp_sock { u8 compressed_ack; u8 dup_ack_counter:2, tlp_retrans:1, /* TLP is a retransmission */ - unused:5; + syn_ect_snt:2, /* AccECN ECT memory, only */ + syn_ect_rcv:2, /* ... needed durign 3WHS + first seqno */ + wait_third_ack:1; /* Wait 3rd ACK in simultaneous open */ + u8 accecn_fail_mode:4; /* AccECN failure handling */ u8 thin_lto : 1,/* Use linear timeouts for thin streams */ fastopen_connect:1, /* FASTOPEN_CONNECT sockopt */ fastopen_no_cookie:1, /* Allow send/recv SYN+data without a cookie */ diff --git a/include/net/tcp.h b/include/net/tcp.h index cc28255deef7..f36a1a3d538f 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -27,6 +27,7 @@ #include <linux/ktime.h> #include <linux/indirect_call_wrapper.h> #include <linux/bits.h> +#include <linux/bitfield.h>
#include <net/inet_connection_sock.h> #include <net/inet_timewait_sock.h> @@ -234,6 +235,37 @@ static_assert((1 << ATO_BITS) > TCP_DELACK_MAX); #define TCPOLEN_MSS_ALIGNED 4 #define TCPOLEN_EXP_SMC_BASE_ALIGNED 8
+/* tp->accecn_fail_mode */ +#define TCP_ACCECN_ACE_FAIL_SEND BIT(0) +#define TCP_ACCECN_ACE_FAIL_RECV BIT(1) +#define TCP_ACCECN_OPT_FAIL_SEND BIT(2) +#define TCP_ACCECN_OPT_FAIL_RECV BIT(3) + +static inline bool tcp_accecn_ace_fail_send(const struct tcp_sock *tp) +{ + return tp->accecn_fail_mode & TCP_ACCECN_ACE_FAIL_SEND; +} + +static inline bool tcp_accecn_ace_fail_recv(const struct tcp_sock *tp) +{ + return tp->accecn_fail_mode & TCP_ACCECN_ACE_FAIL_RECV; +} + +static inline bool tcp_accecn_opt_fail_send(const struct tcp_sock *tp) +{ + return tp->accecn_fail_mode & TCP_ACCECN_OPT_FAIL_SEND; +} + +static inline bool tcp_accecn_opt_fail_recv(const struct tcp_sock *tp) +{ + return tp->accecn_fail_mode & TCP_ACCECN_OPT_FAIL_RECV; +} + +static inline void tcp_accecn_fail_mode_set(struct tcp_sock *tp, u8 mode) +{ + tp->accecn_fail_mode |= mode; +} + /* Flags in tp->nonagle */ #define TCP_NAGLE_OFF 1 /* Nagle's algo is disabled */ #define TCP_NAGLE_CORK 2 /* Socket is corked */ @@ -420,6 +452,23 @@ static inline u8 tcp_accecn_ace(const struct tcphdr *th) return (th->ae << 2) | (th->cwr << 1) | th->ece; }
+/* Infer the ECT value our SYN arrived with from the echoed ACE field */ +static inline int tcp_accecn_extract_syn_ect(u8 ace) +{ + if (ace & 0x1) + return INET_ECN_ECT_1; + if (!(ace & 0x2)) + return INET_ECN_ECT_0; + if (ace & 0x4) + return INET_ECN_CE; + return INET_ECN_NOT_ECT; +} + +bool tcp_accecn_validate_syn_feedback(struct sock *sk, u8 ace, u8 sent_ect); +void tcp_accecn_third_ack(struct sock *sk, const struct sk_buff *skb, + u8 syn_ect_snt); +void tcp_ecn_received_counters(struct sock *sk, const struct sk_buff *skb); + enum tcp_tw_status { TCP_TW_SUCCESS = 0, TCP_TW_RST = 1, @@ -657,6 +706,15 @@ static inline bool cookie_ecn_ok(const struct net *net, const struct dst_entry * dst_feature(dst, RTAX_FEATURE_ECN); }
+/* AccECN specification, 5.1: [...] a server can determine that it + * negotiated AccECN as [...] if the ACK contains an ACE field with + * the value 0b010 to 0b111 (decimal 2 to 7). + */ +static inline bool cookie_accecn_ok(const struct tcphdr *th) +{ + return tcp_accecn_ace(th) > 0x1; +} + #if IS_ENABLED(CONFIG_BPF) static inline bool cookie_bpf_ok(struct sk_buff *skb) { @@ -968,6 +1026,7 @@ static inline u32 tcp_rsk_tsval(const struct tcp_request_sock *treq)
#define TCPHDR_ACE (TCPHDR_ECE | TCPHDR_CWR | TCPHDR_AE) #define TCPHDR_SYN_ECN (TCPHDR_SYN | TCPHDR_ECE | TCPHDR_CWR) +#define TCPHDR_SYNACK_ACCECN (TCPHDR_SYN | TCPHDR_ACK | TCPHDR_CWR)
#define TCP_ACCECN_CEP_ACE_MASK 0x7 #define TCP_ACCECN_ACE_MAX_DELTA 6 @@ -1051,6 +1110,15 @@ struct tcp_skb_cb {
#define TCP_SKB_CB(__skb) ((struct tcp_skb_cb *)&((__skb)->cb[0]))
+static inline u16 tcp_accecn_reflector_flags(u8 ect) +{ + u32 flags = ect + 2; + + if (ect == 3) + flags++; + return FIELD_PREP(TCPHDR_ACE, flags); +} + extern const struct inet_connection_sock_af_ops ipv4_specific;
#if IS_ENABLED(CONFIG_IPV6) @@ -1173,7 +1241,10 @@ enum tcp_ca_ack_event_flags { #define TCP_CONG_NON_RESTRICTED BIT(0) /* Requires ECN/ECT set on all packets */ #define TCP_CONG_NEEDS_ECN BIT(1) -#define TCP_CONG_MASK (TCP_CONG_NON_RESTRICTED | TCP_CONG_NEEDS_ECN) +/* Require successfully negotiated AccECN capability */ +#define TCP_CONG_NEEDS_ACCECN BIT(2) +#define TCP_CONG_MASK (TCP_CONG_NON_RESTRICTED | TCP_CONG_NEEDS_ECN | \ + TCP_CONG_NEEDS_ACCECN)
union tcp_cc_info;
@@ -1305,6 +1376,13 @@ static inline bool tcp_ca_needs_ecn(const struct sock *sk) return icsk->icsk_ca_ops->flags & TCP_CONG_NEEDS_ECN; }
+static inline bool tcp_ca_needs_accecn(const struct sock *sk) +{ + const struct inet_connection_sock *icsk = inet_csk(sk); + + return icsk->icsk_ca_ops->flags & TCP_CONG_NEEDS_ACCECN; +} + static inline void tcp_ca_event(struct sock *sk, const enum tcp_ca_event event) { const struct inet_connection_sock *icsk = inet_csk(sk); diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c index 5459a78b9809..3a44eb9c1d1a 100644 --- a/net/ipv4/syncookies.c +++ b/net/ipv4/syncookies.c @@ -403,6 +403,7 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb) struct tcp_sock *tp = tcp_sk(sk); struct inet_request_sock *ireq; struct net *net = sock_net(sk); + struct tcp_request_sock *treq; struct request_sock *req; struct sock *ret = sk; struct flowi4 fl4; @@ -428,6 +429,7 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb) }
ireq = inet_rsk(req); + treq = tcp_rsk(req);
sk_rcv_saddr_set(req_to_sk(req), ip_hdr(skb)->daddr); sk_daddr_set(req_to_sk(req), ip_hdr(skb)->saddr); @@ -482,6 +484,7 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb) if (!req->syncookie) ireq->rcv_wscale = rcv_wscale; ireq->ecn_ok &= cookie_ecn_ok(net, &rt->dst); + treq->accecn_ok = ireq->ecn_ok && cookie_accecn_ok(th);
ret = tcp_get_cookie_sock(sk, skb, req, &rt->dst); /* ip_queue_xmit() depends on our flow being setup diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c index 3a43010d726f..75ec1a599b52 100644 --- a/net/ipv4/sysctl_net_ipv4.c +++ b/net/ipv4/sysctl_net_ipv4.c @@ -47,6 +47,7 @@ static unsigned int udp_child_hash_entries_max = UDP_HTABLE_SIZE_MAX; static int tcp_plb_max_rounds = 31; static int tcp_plb_max_cong_thresh = 256; static unsigned int tcp_tw_reuse_delay_max = TCP_PAWS_MSL * MSEC_PER_SEC; +static int tcp_ecn_mode_max = 5;
/* obsolete */ static int sysctl_tcp_low_latency __read_mostly; @@ -728,7 +729,7 @@ static struct ctl_table ipv4_net_table[] = { .mode = 0644, .proc_handler = proc_dou8vec_minmax, .extra1 = SYSCTL_ZERO, - .extra2 = SYSCTL_TWO, + .extra2 = &tcp_ecn_mode_max, }, { .procname = "tcp_ecn_fallback", diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 372c58170f4c..73f8cc715bff 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -3364,6 +3364,8 @@ int tcp_disconnect(struct sock *sk, int flags) tp->window_clamp = 0; tp->delivered = 0; tp->delivered_ce = 0; + tp->wait_third_ack = 0; + tp->accecn_fail_mode = 0; tcp_accecn_init_counters(tp); if (icsk->icsk_ca_initialized && icsk->icsk_ca_ops->release) icsk->icsk_ca_ops->release(sk); diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 8dbb625f5e8a..cc34664805f8 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -401,14 +401,93 @@ static void tcp_data_ecn_check(struct sock *sk, const struct sk_buff *skb) } }
-static void tcp_ecn_rcv_synack(struct tcp_sock *tp, const struct tcphdr *th) +/* AccECN specificaiton, 3.1.2: If a TCP server that implements AccECN + * receives a SYN with the three TCP header flags (AE, CWR and ECE) set + * to any combination other than 000, 011 or 111, it MUST negotiate the + * use of AccECN as if they had been set to 111. + */ +static bool tcp_accecn_syn_requested(const struct tcphdr *th) +{ + u8 ace = tcp_accecn_ace(th); + + return ace && ace != 0x3; +} + +/* Check ECN field transition to detect invalid transitions */ +static bool tcp_ect_transition_valid(u8 snt, u8 rcv) +{ + if (rcv == snt) + return true; + + /* Non-ECT altered to something or something became non-ECT */ + if (snt == INET_ECN_NOT_ECT || rcv == INET_ECN_NOT_ECT) + return false; + /* CE -> ECT(0/1)? */ + if (snt == INET_ECN_CE) + return false; + return true; +} + +bool tcp_accecn_validate_syn_feedback(struct sock *sk, u8 ace, u8 sent_ect) { - if (tcp_ecn_mode_rfc3168(tp) && (!th->ece || th->cwr)) + u8 ect = tcp_accecn_extract_syn_ect(ace); + struct tcp_sock *tp = tcp_sk(sk); + + if (!sock_net(sk)->ipv4.sysctl_tcp_ecn_fallback) + return true; + + if (!tcp_ect_transition_valid(sent_ect, ect)) { + tcp_accecn_fail_mode_set(tp, TCP_ACCECN_ACE_FAIL_RECV); + return false; + } + + return true; +} + +/* See Table 2 of the AccECN draft */ +static void tcp_ecn_rcv_synack(struct sock *sk, const struct tcphdr *th, + u8 ip_dsfield) +{ + struct tcp_sock *tp = tcp_sk(sk); + u8 ace = tcp_accecn_ace(th); + + switch (ace) { + case 0x0: + case 0x7: tcp_ecn_mode_set(tp, TCP_ECN_DISABLED); + break; + case 0x1: + case 0x5: + if (tcp_ecn_mode_pending(tp)) + /* Downgrade from AccECN, or requested initially */ + tcp_ecn_mode_set(tp, TCP_ECN_MODE_RFC3168); + break; + default: + tcp_ecn_mode_set(tp, TCP_ECN_MODE_ACCECN); + tp->syn_ect_rcv = ip_dsfield & INET_ECN_MASK; + if (INET_ECN_is_ce(ip_dsfield) && + tcp_accecn_validate_syn_feedback(sk, ace, + tp->syn_ect_snt)) { + tp->received_ce++; + tp->received_ce_pending++; + } + break; + } }
-static void tcp_ecn_rcv_syn(struct tcp_sock *tp, const struct tcphdr *th) +static void tcp_ecn_rcv_syn(struct tcp_sock *tp, const struct tcphdr *th, + const struct sk_buff *skb) { + if (tcp_ecn_mode_pending(tp)) { + if (!tcp_accecn_syn_requested(th)) { + /* Downgrade to classic ECN feedback */ + tcp_ecn_mode_set(tp, TCP_ECN_MODE_RFC3168); + } else { + tp->syn_ect_rcv = TCP_SKB_CB(skb)->ip_dsfield & + INET_ECN_MASK; + tcp_ecn_mode_set(tp, TCP_ECN_MODE_ACCECN); + } + } if (tcp_ecn_mode_rfc3168(tp) && (!th->ece || !th->cwr)) tcp_ecn_mode_set(tp, TCP_ECN_DISABLED); } @@ -3834,7 +3913,7 @@ bool tcp_oow_rate_limited(struct net *net, const struct sk_buff *skb, }
/* RFC 5961 7 [ACK Throttling] */ -static void tcp_send_challenge_ack(struct sock *sk) +static void tcp_send_challenge_ack(struct sock *sk, bool accecn_reflector) { struct tcp_sock *tp = tcp_sk(sk); struct net *net = sock_net(sk); @@ -3864,7 +3943,9 @@ static void tcp_send_challenge_ack(struct sock *sk) WRITE_ONCE(net->ipv4.tcp_challenge_count, count - 1); send_ack: NET_INC_STATS(net, LINUX_MIB_TCPCHALLENGEACK); - tcp_send_ack(sk); + __tcp_send_ack(sk, tp->rcv_nxt, + !accecn_reflector ? 0 : + tcp_accecn_reflector_flags(tp->syn_ect_rcv)); } }
@@ -4031,7 +4112,7 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag) /* RFC 5961 5.2 [Blind Data Injection Attack].[Mitigation] */ if (before(ack, prior_snd_una - max_window)) { if (!(flag & FLAG_NO_CHALLENGE_ACK)) - tcp_send_challenge_ack(sk); + tcp_send_challenge_ack(sk, false); return -SKB_DROP_REASON_TCP_TOO_OLD_ACK; } goto old_ack; @@ -6025,8 +6106,7 @@ static void tcp_urg(struct sock *sk, struct sk_buff *skb, const struct tcphdr *t }
/* Updates Accurate ECN received counters from the received IP ECN field */ -static void tcp_ecn_received_counters(struct sock *sk, - const struct sk_buff *skb) +void tcp_ecn_received_counters(struct sock *sk, const struct sk_buff *skb) { u8 ecnfield = TCP_SKB_CB(skb)->ip_dsfield & INET_ECN_MASK; u8 is_ce = INET_ECN_is_ce(ecnfield); @@ -6067,6 +6147,7 @@ static bool tcp_reset_check(const struct sock *sk, const struct sk_buff *skb) static bool tcp_validate_incoming(struct sock *sk, struct sk_buff *skb, const struct tcphdr *th, int syn_inerr) { + bool send_accecn_reflector = false; struct tcp_sock *tp = tcp_sk(sk); SKB_DR(reason);
@@ -6160,7 +6241,7 @@ static bool tcp_validate_incoming(struct sock *sk, struct sk_buff *skb, if (tp->syn_fastopen && !tp->data_segs_in && sk->sk_state == TCP_ESTABLISHED) tcp_fastopen_active_disable(sk); - tcp_send_challenge_ack(sk); + tcp_send_challenge_ack(sk, false); SKB_DR_SET(reason, TCP_RESET); goto discard; } @@ -6171,16 +6252,27 @@ static bool tcp_validate_incoming(struct sock *sk, struct sk_buff *skb, * RFC 5961 4.2 : Send a challenge ack */ if (th->syn) { + if (tcp_ecn_mode_accecn(tp)) + send_accecn_reflector = true; if (sk->sk_state == TCP_SYN_RECV && sk->sk_socket && th->ack && TCP_SKB_CB(skb)->seq + 1 == TCP_SKB_CB(skb)->end_seq && TCP_SKB_CB(skb)->seq + 1 == tp->rcv_nxt && - TCP_SKB_CB(skb)->ack_seq == tp->snd_nxt) + TCP_SKB_CB(skb)->ack_seq == tp->snd_nxt) { + if (!tcp_ecn_disabled(tp)) { + u8 ect = tp->syn_ect_rcv; + + tp->wait_third_ack = true; + __tcp_send_ack(sk, tp->rcv_nxt, + !send_accecn_reflector ? 0 : + tcp_accecn_reflector_flags(ect)); + } goto pass; + } syn_challenge: if (syn_inerr) TCP_INC_STATS(sock_net(sk), TCP_MIB_INERRS); NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSYNCHALLENGE); - tcp_send_challenge_ack(sk); + tcp_send_challenge_ack(sk, send_accecn_reflector); SKB_DR_SET(reason, TCP_INVALID_SYN); goto discard; } @@ -6393,6 +6485,12 @@ void tcp_rcv_established(struct sock *sk, struct sk_buff *skb) return;
step5: + if (unlikely(tp->wait_third_ack)) { + tp->wait_third_ack = 0; + if (tcp_ecn_mode_accecn(tp)) + tcp_accecn_third_ack(sk, skb, tp->syn_ect_snt); + tcp_fast_path_on(tp); + } tcp_ecn_received_counters(sk, skb);
reason = tcp_ack(sk, skb, FLAG_SLOWPATH | FLAG_UPDATE_TS_RECENT); @@ -6645,7 +6743,8 @@ static int tcp_rcv_synsent_state_process(struct sock *sk, struct sk_buff *skb, * state to ESTABLISHED..." */
- tcp_ecn_rcv_synack(tp, th); + if (tcp_ecn_mode_any(tp)) + tcp_ecn_rcv_synack(sk, th, TCP_SKB_CB(skb)->ip_dsfield);
tcp_init_wl(tp, TCP_SKB_CB(skb)->seq); tcp_try_undo_spurious_syn(sk); @@ -6717,7 +6816,9 @@ static int tcp_rcv_synsent_state_process(struct sock *sk, struct sk_buff *skb, TCP_DELACK_MAX, false); goto consume; } - tcp_send_ack(sk); + __tcp_send_ack(sk, tp->rcv_nxt, + !tcp_ecn_mode_accecn(tp) ? 0 : + tcp_accecn_reflector_flags(tp->syn_ect_rcv)); return -1; }
@@ -6776,7 +6877,7 @@ static int tcp_rcv_synsent_state_process(struct sock *sk, struct sk_buff *skb, tp->snd_wl1 = TCP_SKB_CB(skb)->seq; tp->max_window = tp->snd_wnd;
- tcp_ecn_rcv_syn(tp, th); + tcp_ecn_rcv_syn(tp, th, skb);
tcp_mtup_init(sk); tcp_sync_mss(sk, icsk->icsk_pmtu_cookie); @@ -6958,7 +7059,7 @@ tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb) } /* accept old ack during closing */ if ((int)reason < 0) { - tcp_send_challenge_ack(sk); + tcp_send_challenge_ack(sk, false); reason = -reason; goto discard; } @@ -7005,9 +7106,16 @@ tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb) tp->lsndtime = tcp_jiffies32;
tcp_initialize_rcv_mss(sk); - tcp_fast_path_on(tp); + if (likely(!tp->wait_third_ack)) { + if (tcp_ecn_mode_accecn(tp)) + tcp_accecn_third_ack(sk, skb, tp->syn_ect_snt); + tcp_fast_path_on(tp); + } if (sk->sk_shutdown & SEND_SHUTDOWN) tcp_shutdown(sk, SEND_SHUTDOWN); + + if (sk->sk_socket && tp->wait_third_ack) + goto consume; break;
case TCP_FIN_WAIT1: { @@ -7177,6 +7285,15 @@ static void tcp_ecn_create_request(struct request_sock *req, bool ect, ecn_ok; u32 ecn_ok_dst;
+ if (tcp_accecn_syn_requested(th) && + (net->ipv4.sysctl_tcp_ecn >= 3 || tcp_ca_needs_accecn(listen_sk))) { + inet_rsk(req)->ecn_ok = 1; + tcp_rsk(req)->accecn_ok = 1; + tcp_rsk(req)->syn_ect_rcv = TCP_SKB_CB(skb)->ip_dsfield & + INET_ECN_MASK; + return; + } + if (!th_ecn) return;
@@ -7184,7 +7301,8 @@ static void tcp_ecn_create_request(struct request_sock *req, ecn_ok_dst = dst_feature(dst, DST_FEATURE_ECN_MASK); ecn_ok = READ_ONCE(net->ipv4.sysctl_tcp_ecn) || ecn_ok_dst;
- if (((!ect || th->res1) && ecn_ok) || tcp_ca_needs_ecn(listen_sk) || + if (((!ect || th->res1 || th->ae) && ecn_ok) || + tcp_ca_needs_ecn(listen_sk) || (ecn_ok_dst & DST_FEATURE_ECN_CA) || tcp_bpf_ca_needs_ecn((struct sock *)req)) inet_rsk(req)->ecn_ok = 1; @@ -7202,6 +7320,9 @@ static void tcp_openreq_init(struct request_sock *req, tcp_rsk(req)->snt_synack = 0; tcp_rsk(req)->snt_tsval_first = 0; tcp_rsk(req)->last_oow_ack_time = 0; + tcp_rsk(req)->accecn_ok = 0; + tcp_rsk(req)->syn_ect_rcv = 0; + tcp_rsk(req)->syn_ect_snt = 0; req->mss = rx_opt->mss_clamp; req->ts_recent = rx_opt->saw_tstamp ? rx_opt->rcv_tsval : 0; ireq->tstamp_ok = rx_opt->tstamp_ok; diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index d5b5c32115d2..5c5d4b94b59c 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -1189,7 +1189,7 @@ static int tcp_v4_send_synack(const struct sock *sk, struct dst_entry *dst, enum tcp_synack_type synack_type, struct sk_buff *syn_skb) { - const struct inet_request_sock *ireq = inet_rsk(req); + struct inet_request_sock *ireq = inet_rsk(req); struct flowi4 fl4; int err = -1; struct sk_buff *skb; @@ -1202,6 +1202,7 @@ static int tcp_v4_send_synack(const struct sock *sk, struct dst_entry *dst, skb = tcp_make_synack(sk, dst, req, foc, synack_type, syn_skb);
if (skb) { + tcp_rsk(req)->syn_ect_snt = inet_sk(sk)->tos & INET_ECN_MASK; __tcp_v4_send_check(skb, ireq->ir_loc_addr, ireq->ir_rmt_addr);
tos = READ_ONCE(inet_sk(sk)->tos); diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c index 43d7852ce07e..779a206a5ca6 100644 --- a/net/ipv4/tcp_minisocks.c +++ b/net/ipv4/tcp_minisocks.c @@ -461,12 +461,51 @@ void tcp_openreq_init_rwin(struct request_sock *req, ireq->rcv_wscale = rcv_wscale; }
-static void tcp_ecn_openreq_child(struct tcp_sock *tp, - const struct request_sock *req) +void tcp_accecn_third_ack(struct sock *sk, const struct sk_buff *skb, + u8 syn_ect_snt) { - tcp_ecn_mode_set(tp, inet_rsk(req)->ecn_ok ? - TCP_ECN_MODE_RFC3168 : - TCP_ECN_DISABLED); + u8 ace = tcp_accecn_ace(tcp_hdr(skb)); + struct tcp_sock *tp = tcp_sk(sk); + + switch (ace) { + case 0x0: + tcp_accecn_fail_mode_set(tp, TCP_ACCECN_ACE_FAIL_RECV); + break; + case 0x7: + case 0x5: + case 0x1: + /* Unused but legal values */ + break; + default: + /* Validation only applies to first non-data packet */ + if (TCP_SKB_CB(skb)->seq == TCP_SKB_CB(skb)->end_seq && + !TCP_SKB_CB(skb)->sacked && + tcp_accecn_validate_syn_feedback(sk, ace, syn_ect_snt)) { + if ((tcp_accecn_extract_syn_ect(ace) == INET_ECN_CE) && + !tp->delivered_ce) + tp->delivered_ce++; + } + break; + } +} + +static void tcp_ecn_openreq_child(struct sock *sk, + const struct request_sock *req, + const struct sk_buff *skb) +{ + const struct tcp_request_sock *treq = tcp_rsk(req); + struct tcp_sock *tp = tcp_sk(sk); + + if (treq->accecn_ok) { + tcp_ecn_mode_set(tp, TCP_ECN_MODE_ACCECN); + tp->syn_ect_snt = treq->syn_ect_snt; + tcp_accecn_third_ack(sk, skb, treq->syn_ect_snt); + tcp_ecn_received_counters(sk, skb); + } else { + tcp_ecn_mode_set(tp, inet_rsk(req)->ecn_ok ? + TCP_ECN_MODE_RFC3168 : + TCP_ECN_DISABLED); + } }
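For illustration, the third-ACK handling above boils down to a small classification table; a standalone sketch, with the enum and helper name made up for the example rather than taken from the patch:

enum third_ack_ace_action { ACE_FAIL_RECV, ACE_IGNORE, ACE_VALIDATE };

/* Mirrors the switch in tcp_accecn_third_ack() above */
static enum third_ack_ace_action classify_third_ack_ace(unsigned int ace)
{
        switch (ace) {
        case 0x0:
                return ACE_FAIL_RECV;   /* ACE zeroed in transit: fail mode */
        case 0x1:
        case 0x5:
        case 0x7:
                return ACE_IGNORE;      /* unused but legal values */
        default:
                return ACE_VALIDATE;    /* reflects SYN/ACK ECT; a reflected
                                         * CE may seed delivered_ce once
                                         */
        }
}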
void tcp_ca_openreq_child(struct sock *sk, const struct dst_entry *dst) @@ -631,7 +670,7 @@ struct sock *tcp_create_openreq_child(const struct sock *sk, if (skb->len >= TCP_MSS_DEFAULT + newtp->tcp_header_len) newicsk->icsk_ack.last_seg_size = skb->len - newtp->tcp_header_len; newtp->rx_opt.mss_clamp = req->mss; - tcp_ecn_openreq_child(newtp, req); + tcp_ecn_openreq_child(newsk, req, skb); newtp->fastopen_req = NULL; RCU_INIT_POINTER(newtp->fastopen_rsk, NULL);
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 9c978d12c7cf..b4eac0725682 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -322,7 +322,7 @@ static u16 tcp_select_window(struct sock *sk) /* Packet ECN state for a SYN-ACK */ static void tcp_ecn_send_synack(struct sock *sk, struct sk_buff *skb) { - const struct tcp_sock *tp = tcp_sk(sk); + struct tcp_sock *tp = tcp_sk(sk);
TCP_SKB_CB(skb)->tcp_flags &= ~TCPHDR_CWR; if (tcp_ecn_disabled(tp)) @@ -330,6 +330,13 @@ static void tcp_ecn_send_synack(struct sock *sk, struct sk_buff *skb) else if (tcp_ca_needs_ecn(sk) || tcp_bpf_ca_needs_ecn(sk)) INET_ECN_xmit(sk); + + if (tp->ecn_flags & TCP_ECN_MODE_ACCECN) { + TCP_SKB_CB(skb)->tcp_flags &= ~TCPHDR_ACE; + TCP_SKB_CB(skb)->tcp_flags |= + tcp_accecn_reflector_flags(tp->syn_ect_rcv); + tp->syn_ect_snt = inet_sk(sk)->tos & INET_ECN_MASK; + } }
/* Packet ECN state for a SYN. */ @@ -337,8 +344,20 @@ static void tcp_ecn_send_syn(struct sock *sk, struct sk_buff *skb) { struct tcp_sock *tp = tcp_sk(sk); bool bpf_needs_ecn = tcp_bpf_ca_needs_ecn(sk); - bool use_ecn = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_ecn) == 1 || - tcp_ca_needs_ecn(sk) || bpf_needs_ecn; + bool use_ecn, use_accecn; + u8 tcp_ecn = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_ecn); + + /* ============== ========================== + * tcp_ecn values Outgoing connections + * ============== ========================== + * 0,2,5 Do not request ECN + * 1,4 Request ECN connection + * 3 Request AccECN connection + * ============== ========================== + */ + use_accecn = tcp_ecn == 3 || tcp_ca_needs_accecn(sk); + use_ecn = tcp_ecn == 1 || tcp_ecn == 4 || + tcp_ca_needs_ecn(sk) || bpf_needs_ecn || use_accecn;
if (!use_ecn) { const struct dst_entry *dst = __sk_dst_get(sk); @@ -354,35 +373,58 @@ static void tcp_ecn_send_syn(struct sock *sk, struct sk_buff *skb) INET_ECN_xmit(sk);
TCP_SKB_CB(skb)->tcp_flags |= TCPHDR_ECE | TCPHDR_CWR; - tcp_ecn_mode_set(tp, TCP_ECN_MODE_RFC3168); + if (use_accecn) { + TCP_SKB_CB(skb)->tcp_flags |= TCPHDR_AE; + tcp_ecn_mode_set(tp, TCP_ECN_MODE_PENDING); + tp->syn_ect_snt = inet_sk(sk)->tos & INET_ECN_MASK; + } else { + tcp_ecn_mode_set(tp, TCP_ECN_MODE_RFC3168); + } } }
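A quick way to read the table above is as a pure function from the sysctl value to the requested SYN mode; a minimal sketch (the enum and helper are illustrative only, and the tcp_ca_needs_*()/BPF overrides are omitted):

enum syn_ecn_mode { SYN_NO_ECN, SYN_RFC3168_ECN, SYN_ACCECN };

static enum syn_ecn_mode syn_mode_for(unsigned char tcp_ecn)
{
        if (tcp_ecn == 3)
                return SYN_ACCECN;      /* SYN sent with AE|CWR|ECE */
        if (tcp_ecn == 1 || tcp_ecn == 4)
                return SYN_RFC3168_ECN; /* SYN sent with CWR|ECE */
        return SYN_NO_ECN;              /* 0, 2 and 5: do not request ECN */
}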
static void tcp_ecn_clear_syn(struct sock *sk, struct sk_buff *skb) { - if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_ecn_fallback)) + if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_ecn_fallback)) { /* tp->ecn_flags are cleared at a later point in time when * SYN ACK is ultimatively being received. */ - TCP_SKB_CB(skb)->tcp_flags &= ~(TCPHDR_ECE | TCPHDR_CWR); + TCP_SKB_CB(skb)->tcp_flags &= ~TCPHDR_ACE; + } +} + +static void tcp_accecn_echo_syn_ect(struct tcphdr *th, u8 ect) +{ + th->ae = !!(ect & INET_ECN_ECT_0); + th->cwr = ect != INET_ECN_ECT_0; + th->ece = ect == INET_ECN_ECT_1; }
static void tcp_ecn_make_synack(const struct request_sock *req, struct tcphdr *th) { - if (inet_rsk(req)->ecn_ok) + if (tcp_rsk(req)->accecn_ok) + tcp_accecn_echo_syn_ect(th, tcp_rsk(req)->syn_ect_rcv); + else if (inet_rsk(req)->ecn_ok) th->ece = 1; }
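The reflector encoding in tcp_accecn_echo_syn_ect() can be checked in isolation; a small userspace program (ECN codepoint values as in include/net/inet_ecn.h), expected to print the SYN/ACK rows of Table 2 of the draft:

#include <stdio.h>

int main(void)
{
        const char *name[] = { "Not-ECT", "ECT(1)", "ECT(0)", "CE" };

        for (unsigned int ect = 0; ect <= 3; ect++) {
                unsigned int ae  = !!(ect & 0x2);       /* INET_ECN_ECT_0 */
                unsigned int cwr = ect != 0x2;
                unsigned int ece = ect == 0x1;

                printf("%-7s -> AE=%u CWR=%u ECE=%u\n",
                       name[ect], ae, cwr, ece);
        }
        return 0;       /* 010, 011, 100, 110 for the four codepoints */
}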
-static void tcp_accecn_set_ace(struct tcphdr *th, struct tcp_sock *tp) +static void tcp_accecn_set_ace(struct tcp_sock *tp, struct sk_buff *skb, + struct tcphdr *th) { u32 wire_ace;
- wire_ace = tp->received_ce + TCP_ACCECN_CEP_INIT_OFFSET; - th->ece = !!(wire_ace & 0x1); - th->cwr = !!(wire_ace & 0x2); - th->ae = !!(wire_ace & 0x4); - tp->received_ce_pending = 0; + /* The final packet of the 3WHS or anything like it must reflect + * the SYN/ACK ECT instead of putting CEP into ACE field, such + * cases show up in tcp_flags. + */ + if (likely(!(TCP_SKB_CB(skb)->tcp_flags & TCPHDR_ACE))) { + wire_ace = tp->received_ce + TCP_ACCECN_CEP_INIT_OFFSET; + th->ece = !!(wire_ace & 0x1); + th->cwr = !!(wire_ace & 0x2); + th->ae = !!(wire_ace & 0x4); + tp->received_ce_pending = 0; + } }
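For reference, the ACE field carries received_ce modulo 8 with an initial offset of 5; a standalone sketch of the arithmetic used above:

#include <stdio.h>

int main(void)
{
        for (unsigned int received_ce = 0; received_ce <= 4; received_ce++) {
                unsigned int wire_ace = received_ce + 5; /* CEP init offset */
                unsigned int ece = !!(wire_ace & 0x1);
                unsigned int cwr = !!(wire_ace & 0x2);
                unsigned int ae  = !!(wire_ace & 0x4);

                printf("received_ce=%u -> ACE=%u%u%u\n",
                       received_ce, ae, cwr, ece);
        }
        return 0;       /* prints 101, 110, 111, 000, 001: wraps mod 8 */
}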
/* Set up ECN state for a packet on a ESTABLISHED socket that is about to @@ -396,9 +438,10 @@ static void tcp_ecn_send(struct sock *sk, struct sk_buff *skb, if (!tcp_ecn_mode_any(tp)) return;
- INET_ECN_xmit(sk); + if (!tcp_accecn_ace_fail_recv(tp)) + INET_ECN_xmit(sk); if (tcp_ecn_mode_accecn(tp)) { - tcp_accecn_set_ace(th, tp); + tcp_accecn_set_ace(tp, skb, th); skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ACCECN; } else { /* Not-retransmitted data segment: set ECT and inject CWR. */ @@ -3414,7 +3457,10 @@ int __tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb, int segs) tcp_retrans_try_collapse(sk, skb, avail_wnd); }
- /* RFC3168, section 6.1.1.1. ECN fallback */ + /* RFC3168, section 6.1.1.1. ECN fallback + * As AccECN uses the same SYN flags (+ AE), this check covers both + * cases. + */ if ((TCP_SKB_CB(skb)->tcp_flags & TCPHDR_SYN_ECN) == TCPHDR_SYN_ECN) tcp_ecn_clear_syn(sk, skb);
diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c index 9d83eadd308b..50046460ee0b 100644 --- a/net/ipv6/syncookies.c +++ b/net/ipv6/syncookies.c @@ -264,6 +264,7 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb) if (!req->syncookie) ireq->rcv_wscale = rcv_wscale; ireq->ecn_ok &= cookie_ecn_ok(net, dst); + tcp_rsk(req)->accecn_ok = ireq->ecn_ok && cookie_accecn_ok(th);
ret = tcp_get_cookie_sock(sk, skb, req, dst); if (!ret) { diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index 7dcb33f879ee..34381f94f3ca 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -542,6 +542,7 @@ static int tcp_v6_send_synack(const struct sock *sk, struct dst_entry *dst, skb = tcp_make_synack(sk, dst, req, foc, synack_type, syn_skb);
if (skb) { + tcp_rsk(req)->syn_ect_snt = np->tclass & INET_ECN_MASK; __tcp_v6_send_check(skb, &ireq->ir_v6_loc_addr, &ireq->ir_v6_rmt_addr);
On 4/22/25 5:35 PM, chia-yu.chang@nokia-bell-labs.com wrote:
diff --git a/include/linux/tcp.h b/include/linux/tcp.h index e36018203bd0..af38fff24aa4 100644 --- a/include/linux/tcp.h +++ b/include/linux/tcp.h @@ -156,6 +156,10 @@ struct tcp_request_sock { #if IS_ENABLED(CONFIG_MPTCP) bool drop_req; #endif
+	u8 accecn_ok : 1,
+		syn_ect_snt: 2,
+		syn_ect_rcv: 2;
+	u8 accecn_fail_mode:4;
AFAICS this will create a 3-byte hole. That could be bad if it also increases the number of cachelines used by struct tcp_request_sock. Please include the pahole info and struct size in the commit message.
If there is no size problem I guess you are better off using a 'bool' for 'accecn_ok'
u32 txhash; u32 rcv_isn; u32 snt_isn; @@ -376,7 +380,10 @@ struct tcp_sock { u8 compressed_ack; u8 dup_ack_counter:2, tlp_retrans:1, /* TLP is a retransmission */
-		unused:5;
+		syn_ect_snt:2,	/* AccECN ECT memory, only */
+		syn_ect_rcv:2,	/* ... needed during 3WHS + first seqno */
+		wait_third_ack:1; /* Wait 3rd ACK in simultaneous open */
A good bunch of conditionals checking this flag will be added to the fast path. Is simultaneous open really a thing for AccECN? Can we simply disable AccECN in such scenarios and simplify the code a bit? In my limited experience only syzkaller reliably uses it.
+	u8 accecn_fail_mode:4;	/* AccECN failure handling */
This is outside the fastpath area, so possibly the struct size increase is less critical, but AFAICS this will create a 6-bit hole (as the next u8 has only 6 bits used). I think it's better to re-add the 'unused' field to mark such a hole.
u8 thin_lto : 1,/* Use linear timeouts for thin streams */ fastopen_connect:1, /* FASTOPEN_CONNECT sockopt */ fastopen_no_cookie:1, /* Allow send/recv SYN+data without a cookie */
[...]
+/* See Table 2 of the AccECN draft */
+static void tcp_ecn_rcv_synack(struct sock *sk, const struct tcphdr *th,
+			       u8 ip_dsfield)
+{
+	struct tcp_sock *tp = tcp_sk(sk);
+	u8 ace = tcp_accecn_ace(th);
+
+	switch (ace) {
+	case 0x0:
+	case 0x7:
+		tcp_ecn_mode_set(tp, TCP_ECN_DISABLED);
+		break;
+	case 0x1:
+	case 0x5:
Possibly some human-readable defines could help instead of magic numbers here.
[...]
@@ -6171,16 +6252,27 @@ static bool tcp_validate_incoming(struct sock *sk, struct sk_buff *skb, * RFC 5961 4.2 : Send a challenge ack */ if (th->syn) {
+		if (tcp_ecn_mode_accecn(tp))
+			send_accecn_reflector = true;
 		if (sk->sk_state == TCP_SYN_RECV && sk->sk_socket && th->ack &&
 		    TCP_SKB_CB(skb)->seq + 1 == TCP_SKB_CB(skb)->end_seq &&
 		    TCP_SKB_CB(skb)->seq + 1 == tp->rcv_nxt &&
-		    TCP_SKB_CB(skb)->ack_seq == tp->snd_nxt)
+		    TCP_SKB_CB(skb)->ack_seq == tp->snd_nxt) {
+			if (!tcp_ecn_disabled(tp)) {
+				u8 ect = tp->syn_ect_rcv;
+
+				tp->wait_third_ack = true;
+				__tcp_send_ack(sk, tp->rcv_nxt,
+					       !send_accecn_reflector ? 0 :
+					       tcp_accecn_reflector_flags(ect));
The same expression is used above; possibly you can create a new helper for this statement.
...
This patch is quite huge. Any hope to break it down to a more palatable size? I.e., moving the 3rd ack/self connect handling to a separate patch (if that thing is really needed).
/P
-----Original Message----- From: Paolo Abeni pabeni@redhat.com Sent: Tuesday, April 29, 2025 12:37 PM To: Chia-Yu Chang (Nokia) chia-yu.chang@nokia-bell-labs.com; horms@kernel.org; dsahern@kernel.org; kuniyu@amazon.com; bpf@vger.kernel.org; netdev@vger.kernel.org; dave.taht@gmail.com; jhs@mojatatu.com; kuba@kernel.org; stephen@networkplumber.org; xiyou.wangcong@gmail.com; jiri@resnulli.us; davem@davemloft.net; edumazet@google.com; andrew+netdev@lunn.ch; donald.hunter@gmail.com; ast@fiberby.net; liuhangbin@gmail.com; shuah@kernel.org; linux-kselftest@vger.kernel.org; ij@kernel.org; ncardwell@google.com; Koen De Schepper (Nokia) koen.de_schepper@nokia-bell-labs.com; g.white g.white@cablelabs.com; ingemar.s.johansson@ericsson.com; mirja.kuehlewind@ericsson.com; cheshire@apple.com; rs.ietf@gmx.at; Jason_Livingood@comcast.com; vidhi_goel vidhi_goel@apple.com Cc: Olivier Tilmans (Nokia) olivier.tilmans@nokia.com Subject: Re: [PATCH v5 net-next 04/15] tcp: accecn: AccECN negotiation
On 4/22/25 5:35 PM, chia-yu.chang@nokia-bell-labs.com wrote:
diff --git a/include/linux/tcp.h b/include/linux/tcp.h index e36018203bd0..af38fff24aa4 100644 --- a/include/linux/tcp.h +++ b/include/linux/tcp.h @@ -156,6 +156,10 @@ struct tcp_request_sock { #if IS_ENABLED(CONFIG_MPTCP) bool drop_req; #endif
+	u8 accecn_ok : 1,
+		syn_ect_snt: 2,
+		syn_ect_rcv: 2;
+	u8 accecn_fail_mode:4;
AFAICS this will create a 3-byte hole. That could be bad if it also increases the number of cachelines used by struct tcp_request_sock. Please include the pahole info and struct size in the commit message.
If there is no size problem I guess you are better off using a 'bool' for 'accecn_ok'
Hi Paolo,
Thanks for the feedback. I will include the pahole info in the commit message and see whether I can reduce the size of the holes.
u32 txhash; u32 rcv_isn; u32 snt_isn;
@@ -376,7 +380,10 @@ struct tcp_sock { u8 compressed_ack; u8 dup_ack_counter:2, tlp_retrans:1, /* TLP is a retransmission */
-		unused:5;
+		syn_ect_snt:2,	/* AccECN ECT memory, only */
+		syn_ect_rcv:2,	/* ... needed during 3WHS + first seqno */
+		wait_third_ack:1; /* Wait 3rd ACK in simultaneous open */
A good bunch of conditionals checking this flag will be added to the fast path. Is simultaneous open really a thing for AccECN? Can we simply disable AccECN in such scenarios and simplify the code a bit? In my limited experience only syzkaller reliably uses it.
There are a few simultaneous open test cases for AccECN in packetdrill: https://github.com/minuscat/packetdrill_accecn/tree/main/gtests/net/tcp/acce...
+	u8 accecn_fail_mode:4;	/* AccECN failure handling */
This is outside the fastpath area, so possibly the struct size increase is less critical, but AFAICS this will create a 6-bit hole (as the next u8 has only 6 bits used). I think it's better to re-add the 'unused' field to mark such a hole.
Sure, will take action in the next version: either provide the pahole info in the commit message or re-add the unused field to mark such a hole.
u8 thin_lto : 1,/* Use linear timeouts for thin streams */ fastopen_connect:1, /* FASTOPEN_CONNECT sockopt */ fastopen_no_cookie:1, /* Allow send/recv SYN+data without a cookie */
[...]
+/* See Table 2 of the AccECN draft */
+static void tcp_ecn_rcv_synack(struct sock *sk, const struct tcphdr *th,
+			       u8 ip_dsfield)
+{
+	struct tcp_sock *tp = tcp_sk(sk);
+	u8 ace = tcp_accecn_ace(th);
+
+	switch (ace) {
+	case 0x0:
+	case 0x7:
+		tcp_ecn_mode_set(tp, TCP_ECN_DISABLED);
+		break;
+	case 0x1:
+	case 0x5:
Possibly some human-readable defines could help instead of magic numbers here.
Sure, I will add comments here.
[...]
@@ -6171,16 +6252,27 @@ static bool tcp_validate_incoming(struct sock *sk, struct sk_buff *skb, * RFC 5961 4.2 : Send a challenge ack */ if (th->syn) {
+		if (tcp_ecn_mode_accecn(tp))
+			send_accecn_reflector = true;
 		if (sk->sk_state == TCP_SYN_RECV && sk->sk_socket && th->ack &&
 		    TCP_SKB_CB(skb)->seq + 1 == TCP_SKB_CB(skb)->end_seq &&
 		    TCP_SKB_CB(skb)->seq + 1 == tp->rcv_nxt &&
-		    TCP_SKB_CB(skb)->ack_seq == tp->snd_nxt)
+		    TCP_SKB_CB(skb)->ack_seq == tp->snd_nxt) {
+			if (!tcp_ecn_disabled(tp)) {
+				u8 ect = tp->syn_ect_rcv;
+
+				tp->wait_third_ack = true;
+				__tcp_send_ack(sk, tp->rcv_nxt,
+					       !send_accecn_reflector ? 0 :
+					       tcp_accecn_reflector_flags(ect));
The same expression is used above; possibly you can create a new helper for this statement.
OK, will do that.
...
This patch is quite huge. Any hope to break it down to a more palatable size? I.e., moving the 3rd ack/self connect handling to a separate patch (if that thing is really needed).
I am OK to try to reduce the number of patches in this series, but this series shall be the key for AccECN: ACE bitfield, TCP option, fallback, error handling, etc. If you have any suggestions, I am fine to take action. And after this series, we still have 15 patches, including several corner-case handlings in the RFC, documentation, etc.
Chia-Yu
/P
From: Ilpo Järvinen ij@kernel.org
These counters track IP ECN field payload byte sums for all arriving (acceptable) packets. The AccECN option (added by a later patch in the series) echoes these counters back to the sender side.
Signed-off-by: Ilpo Järvinen ij@kernel.org Signed-off-by: Neal Cardwell ncardwell@google.com Signed-off-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com --- include/linux/tcp.h | 1 + include/net/tcp.h | 18 +++++++++++++++++- net/ipv4/tcp.c | 3 ++- net/ipv4/tcp_input.c | 13 +++++++++---- net/ipv4/tcp_minisocks.c | 3 ++- 5 files changed, 31 insertions(+), 7 deletions(-)
diff --git a/include/linux/tcp.h b/include/linux/tcp.h index af38fff24aa4..9cbfefd693e3 100644 --- a/include/linux/tcp.h +++ b/include/linux/tcp.h @@ -303,6 +303,7 @@ struct tcp_sock { u32 delivered; /* Total data packets delivered incl. rexmits */ u32 delivered_ce; /* Like the above but only ECE marked packets */ u32 received_ce; /* Like the above but for rcvd CE marked pkts */ + u32 received_ecn_bytes[3]; u8 received_ce_pending:4, /* Not yet transmit cnt of received_ce */ unused2:4; u32 app_limited; /* limited until "delivered" reaches this val */ diff --git a/include/net/tcp.h b/include/net/tcp.h index f36a1a3d538f..6ffa4ae085db 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -467,7 +467,8 @@ static inline int tcp_accecn_extract_syn_ect(u8 ace) bool tcp_accecn_validate_syn_feedback(struct sock *sk, u8 ace, u8 sent_ect); void tcp_accecn_third_ack(struct sock *sk, const struct sk_buff *skb, u8 syn_ect_snt); -void tcp_ecn_received_counters(struct sock *sk, const struct sk_buff *skb); +void tcp_ecn_received_counters(struct sock *sk, const struct sk_buff *skb, + u32 payload_len);
enum tcp_tw_status { TCP_TW_SUCCESS = 0, @@ -1035,11 +1036,26 @@ static inline u32 tcp_rsk_tsval(const struct tcp_request_sock *treq) * See draft-ietf-tcpm-accurate-ecn for the latest values. */ #define TCP_ACCECN_CEP_INIT_OFFSET 5 +#define TCP_ACCECN_E1B_INIT_OFFSET 1 +#define TCP_ACCECN_E0B_INIT_OFFSET 1 +#define TCP_ACCECN_CEB_INIT_OFFSET 0 + +static inline void __tcp_accecn_init_bytes_counters(int *counter_array) +{ + BUILD_BUG_ON(INET_ECN_ECT_1 != 0x1); + BUILD_BUG_ON(INET_ECN_ECT_0 != 0x2); + BUILD_BUG_ON(INET_ECN_CE != 0x3); + + counter_array[INET_ECN_ECT_1 - 1] = 0; + counter_array[INET_ECN_ECT_0 - 1] = 0; + counter_array[INET_ECN_CE - 1] = 0; +}
static inline void tcp_accecn_init_counters(struct tcp_sock *tp) { tp->received_ce = 0; tp->received_ce_pending = 0; + __tcp_accecn_init_bytes_counters(tp->received_ecn_bytes); }
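A toy illustration of the ecnfield - 1 indexing these helpers rely on (codepoint values as asserted by the BUILD_BUG_ONs above):

#include <stdio.h>

#define INET_ECN_ECT_1	0x1	/* index 0 */
#define INET_ECN_ECT_0	0x2	/* index 1 */
#define INET_ECN_CE	0x3	/* index 2 */

int main(void)
{
        unsigned int received_ecn_bytes[3] = { 0, 0, 0 };

        /* a 1460-byte CE-marked segment arrives */
        received_ecn_bytes[INET_ECN_CE - 1] += 1460;
        printf("ce bytes: %u\n", received_ecn_bytes[INET_ECN_CE - 1]);
        return 0;
}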
/* State flags for sacked in struct tcp_skb_cb */ diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 73f8cc715bff..1e21bdf43f23 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -5092,6 +5092,7 @@ static void __init tcp_struct_check(void) CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, delivered); CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, delivered_ce); CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, received_ce); + CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, received_ecn_bytes); CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, app_limited); CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, rcv_wnd); CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, rx_opt); @@ -5099,7 +5100,7 @@ static void __init tcp_struct_check(void) /* 32bit arches with 8byte alignment on u64 fields might need padding * before tcp_clock_cache. */ - CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 97 + 7); + CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 109 + 7);
/* RX read-write hotpath cache lines */ CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_rx, bytes_received); diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index cc34664805f8..c017e342f092 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -6106,7 +6106,8 @@ static void tcp_urg(struct sock *sk, struct sk_buff *skb, const struct tcphdr *t }
/* Updates Accurate ECN received counters from the received IP ECN field */ -void tcp_ecn_received_counters(struct sock *sk, const struct sk_buff *skb) +void tcp_ecn_received_counters(struct sock *sk, const struct sk_buff *skb, + u32 payload_len) { u8 ecnfield = TCP_SKB_CB(skb)->ip_dsfield & INET_ECN_MASK; u8 is_ce = INET_ECN_is_ce(ecnfield); @@ -6121,6 +6122,9 @@ void tcp_ecn_received_counters(struct sock *sk, const struct sk_buff *skb) tp->received_ce += pcount; tp->received_ce_pending = min(tp->received_ce_pending + pcount, 0xfU); + + if (payload_len > 0) + tp->received_ecn_bytes[ecnfield - 1] += payload_len; } }
@@ -6398,7 +6402,7 @@ void tcp_rcv_established(struct sock *sk, struct sk_buff *skb) flag |= __tcp_replace_ts_recent(tp, delta);
- tcp_ecn_received_counters(sk, skb); + tcp_ecn_received_counters(sk, skb, 0);
/* We know that such packets are checksummed * on entry. @@ -6444,7 +6448,8 @@ void tcp_rcv_established(struct sock *sk, struct sk_buff *skb) /* Bulk data transfer: receiver */ tcp_cleanup_skb(skb); __skb_pull(skb, tcp_header_len); - tcp_ecn_received_counters(sk, skb); + tcp_ecn_received_counters(sk, skb, + len - tcp_header_len); eaten = tcp_queue_rcv(sk, skb, &fragstolen);
tcp_event_data_recv(sk, skb); @@ -6491,7 +6496,7 @@ void tcp_rcv_established(struct sock *sk, struct sk_buff *skb) tcp_accecn_third_ack(sk, skb, tp->syn_ect_snt); tcp_fast_path_on(tp); } - tcp_ecn_received_counters(sk, skb); + tcp_ecn_received_counters(sk, skb, len - th->doff * 4);
reason = tcp_ack(sk, skb, FLAG_SLOWPATH | FLAG_UPDATE_TS_RECENT); if ((int)reason < 0) { diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c index 779a206a5ca6..3f8225bae49f 100644 --- a/net/ipv4/tcp_minisocks.c +++ b/net/ipv4/tcp_minisocks.c @@ -497,10 +497,11 @@ static void tcp_ecn_openreq_child(struct sock *sk, struct tcp_sock *tp = tcp_sk(sk);
if (treq->accecn_ok) { + const struct tcphdr *th = (const struct tcphdr *)skb->data; tcp_ecn_mode_set(tp, TCP_ECN_MODE_ACCECN); tp->syn_ect_snt = treq->syn_ect_snt; tcp_accecn_third_ack(sk, skb, treq->syn_ect_snt); - tcp_ecn_received_counters(sk, skb); + tcp_ecn_received_counters(sk, skb, skb->len - th->doff * 4); } else { tcp_ecn_mode_set(tp, inet_rsk(req)->ecn_ok ? TCP_ECN_MODE_RFC3168 :
On 4/22/25 5:35 PM, chia-yu.chang@nokia-bell-labs.com wrote:
diff --git a/include/linux/tcp.h b/include/linux/tcp.h index af38fff24aa4..9cbfefd693e3 100644 --- a/include/linux/tcp.h +++ b/include/linux/tcp.h @@ -303,6 +303,7 @@ struct tcp_sock { u32 delivered; /* Total data packets delivered incl. rexmits */ u32 delivered_ce; /* Like the above but only ECE marked packets */ u32 received_ce; /* Like the above but for rcvd CE marked pkts */
+	u32 received_ecn_bytes[3];
I'm unsure if this should belong to the fast-path area. In any case AFAICS this is the wrong location: the fields are only written, and only in the rx path, while the above chunk belongs to the tcp_sock_write_txrx group.
/P
From: Ilpo Järvinen ij@kernel.org
AccECN byte counter estimation requires delivered bytes which can be calculated while processing SACK blocks and cumulative ACK. The delivered bytes will be used to estimate the byte counters between AccECN option (on ACKs w/o the option).
Non-SACK calculation is quite annoying, inaccurate, and likely bogus.
Signed-off-by: Ilpo Järvinen ij@kernel.org Signed-off-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com --- net/ipv4/tcp_input.c | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index c017e342f092..5bd7fc9bcf66 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -1170,6 +1170,7 @@ struct tcp_sacktag_state { u64 last_sackt; u32 reord; u32 sack_delivered; + u32 delivered_bytes; int flag; unsigned int mss_now; struct rate_sample *rate; @@ -1531,7 +1532,7 @@ static int tcp_match_skb_to_sack(struct sock *sk, struct sk_buff *skb, static u8 tcp_sacktag_one(struct sock *sk, struct tcp_sacktag_state *state, u8 sacked, u32 start_seq, u32 end_seq, - int dup_sack, int pcount, + int dup_sack, int pcount, u32 plen, u64 xmit_time) { struct tcp_sock *tp = tcp_sk(sk); @@ -1591,6 +1592,7 @@ static u8 tcp_sacktag_one(struct sock *sk, tp->sacked_out += pcount; /* Out-of-order packets delivered */ state->sack_delivered += pcount; + state->delivered_bytes += plen;
/* Lost marker hint past SACKed? Tweak RFC3517 cnt */ if (tp->lost_skb_hint && @@ -1632,7 +1634,7 @@ static bool tcp_shifted_skb(struct sock *sk, struct sk_buff *prev, * tcp_highest_sack_seq() when skb is highest_sack. */ tcp_sacktag_one(sk, state, TCP_SKB_CB(skb)->sacked, - start_seq, end_seq, dup_sack, pcount, + start_seq, end_seq, dup_sack, pcount, skb->len, tcp_skb_timestamp_us(skb)); tcp_rate_skb_delivered(sk, skb, state->rate);
@@ -1924,6 +1926,7 @@ static struct sk_buff *tcp_sacktag_walk(struct sk_buff *skb, struct sock *sk, TCP_SKB_CB(skb)->end_seq, dup_sack, tcp_skb_pcount(skb), + skb->len, tcp_skb_timestamp_us(skb)); tcp_rate_skb_delivered(sk, skb, state->rate); if (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED) @@ -3540,6 +3543,8 @@ static int tcp_clean_rtx_queue(struct sock *sk, const struct sk_buff *ack_skb,
if (sacked & TCPCB_SACKED_ACKED) { tp->sacked_out -= acked_pcount; + /* snd_una delta covers these skbs */ + sack->delivered_bytes -= skb->len; } else if (tcp_is_sack(tp)) { tcp_count_delivered(tp, acked_pcount, ece_ack); if (!tcp_skb_spurious_retrans(tp, skb)) @@ -3643,6 +3648,10 @@ static int tcp_clean_rtx_queue(struct sock *sk, const struct sk_buff *ack_skb, delta = prior_sacked - tp->sacked_out; tp->lost_cnt_hint -= min(tp->lost_cnt_hint, delta); } + + sack->delivered_bytes += (skb ? + TCP_SKB_CB(skb)->seq : tp->snd_una) - + prior_snd_una; } else if (skb && rtt_update && sack_rtt_us >= 0 && sack_rtt_us > tcp_stamp_us_delta(tp->tcp_mstamp, tcp_skb_timestamp_us(skb))) { @@ -4097,6 +4106,7 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag) sack_state.first_sackt = 0; sack_state.rate = &rs; sack_state.sack_delivered = 0; + sack_state.delivered_bytes = 0;
/* We very likely will need to access rtx queue. */ prefetch(sk->tcp_rtx_queue.rb_node);
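A worked toy example of the bookkeeping above, assuming two 1000-byte skbs where skb #1 (bytes 1000..1999) is SACKed first and a later cumulative ACK to 3000 covers both:

#include <stdio.h>

int main(void)
{
        unsigned int prior_snd_una = 1000;
        int delivered_bytes;

        /* ACK #1: skb #1 newly SACKed in tcp_sacktag_one() */
        delivered_bytes = 1000;
        printf("ACK #1: %d\n", delivered_bytes);        /* 1000 */

        /* ACK #2: cumulative ACK; skb #1 was already counted via SACK */
        delivered_bytes = 0;
        delivered_bytes -= 1000;                        /* SACKED_ACKED skb */
        delivered_bytes += 3000 - prior_snd_una;        /* snd_una delta */
        printf("ACK #2: %d\n", delivered_bytes);        /* 1000: skb #2 only */
        return 0;
}

Each byte is thus counted exactly once across the two ACKs.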
From: Ilpo Järvinen ij@kernel.org
There is some wasted space in the option usage due to padding of 32-bit fields. AccECN option can take advantage of those few bytes as its tail is often consuming just a few odd bytes.
Signed-off-by: Ilpo Järvinen ij@kernel.org Signed-off-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com --- net/ipv4/tcp_output.c | 22 +++++++++++++++++----- 1 file changed, 17 insertions(+), 5 deletions(-)
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index b4eac0725682..d63f505a30e2 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -709,6 +709,8 @@ static __be32 *process_tcp_ao_options(struct tcp_sock *tp, return ptr; }
+#define NOP_LEFTOVER ((TCPOPT_NOP << 8) | TCPOPT_NOP) + /* Write previously computed TCP options to the packet. * * Beware: Something in the Internet is very sensitive to the ordering of @@ -727,8 +729,10 @@ static void tcp_options_write(struct tcphdr *th, struct tcp_sock *tp, struct tcp_out_options *opts, struct tcp_key *key) { + u16 leftover_bytes = NOP_LEFTOVER; /* replace next NOPs if avail */ __be32 *ptr = (__be32 *)(th + 1); u16 options = opts->options; /* mungable copy */ + int leftover_size = 2;
if (tcp_key_is_md5(key)) { *ptr++ = htonl((TCPOPT_NOP << 24) | (TCPOPT_NOP << 16) | @@ -763,17 +767,22 @@ static void tcp_options_write(struct tcphdr *th, struct tcp_sock *tp, }
if (unlikely(OPTION_SACK_ADVERTISE & options)) { - *ptr++ = htonl((TCPOPT_NOP << 24) | - (TCPOPT_NOP << 16) | + *ptr++ = htonl((leftover_bytes << 16) | (TCPOPT_SACK_PERM << 8) | TCPOLEN_SACK_PERM); + leftover_bytes = NOP_LEFTOVER; }
if (unlikely(OPTION_WSCALE & options)) { - *ptr++ = htonl((TCPOPT_NOP << 24) | + u8 highbyte = TCPOPT_NOP; + + if (unlikely(leftover_size == 1)) + highbyte = leftover_bytes >> 8; + *ptr++ = htonl((highbyte << 24) | (TCPOPT_WINDOW << 16) | (TCPOLEN_WINDOW << 8) | opts->ws); + leftover_bytes = NOP_LEFTOVER; }
if (unlikely(opts->num_sack_blocks)) { @@ -781,8 +790,7 @@ static void tcp_options_write(struct tcphdr *th, struct tcp_sock *tp, tp->duplicate_sack : tp->selective_acks; int this_sack;
- *ptr++ = htonl((TCPOPT_NOP << 24) | - (TCPOPT_NOP << 16) | + *ptr++ = htonl((leftover_bytes << 16) | (TCPOPT_SACK << 8) | (TCPOLEN_SACK_BASE + (opts->num_sack_blocks * TCPOLEN_SACK_PERBLOCK))); @@ -794,6 +802,10 @@ static void tcp_options_write(struct tcphdr *th, struct tcp_sock *tp, }
tp->rx_opt.dsack = 0; + } else if (unlikely(leftover_bytes != NOP_LEFTOVER)) { + *ptr++ = htonl((leftover_bytes << 16) | + (TCPOPT_NOP << 8) | + TCPOPT_NOP); }
if (unlikely(OPTION_FAST_OPEN_COOKIE & options)) {
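The packing trick may be easier to see on the wire; a standalone sketch of how the two leftover bytes land where the two leading NOPs of the SACK-permitted word would otherwise be:

#include <stdio.h>
#include <arpa/inet.h>

#define TCPOPT_NOP		1
#define TCPOPT_SACK_PERM	4
#define TCPOLEN_SACK_PERM	2
#define NOP_LEFTOVER	((TCPOPT_NOP << 8) | TCPOPT_NOP)

int main(void)
{
        unsigned short leftover_bytes = NOP_LEFTOVER; /* or an option tail */
        unsigned int word = htonl((leftover_bytes << 16) |
                                  (TCPOPT_SACK_PERM << 8) |
                                  TCPOLEN_SACK_PERM);

        /* 01010402 = NOP NOP SACK_PERM len when nothing is leftover */
        printf("%08x\n", ntohl(word));
        return 0;
}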
On 4/22/25 5:35 PM, chia-yu.chang@nokia-bell-labs.com wrote:
@@ -709,6 +709,8 @@ static __be32 *process_tcp_ao_options(struct tcp_sock *tp, return ptr; }
+#define NOP_LEFTOVER	((TCPOPT_NOP << 8) | TCPOPT_NOP)
+
 /* Write previously computed TCP options to the packet.
  *
  * Beware: Something in the Internet is very sensitive to the ordering of
@@ -727,8 +729,10 @@ static void tcp_options_write(struct tcphdr *th, struct tcp_sock *tp, struct tcp_out_options *opts, struct tcp_key *key) {
+	u16 leftover_bytes = NOP_LEFTOVER; /* replace next NOPs if avail */
 	__be32 *ptr = (__be32 *)(th + 1);
 	u16 options = opts->options;	/* mungable copy */
+	int leftover_size = 2;

 	if (tcp_key_is_md5(key)) { *ptr++ = htonl((TCPOPT_NOP << 24) | (TCPOPT_NOP << 16) |
@@ -763,17 +767,22 @@ static void tcp_options_write(struct tcphdr *th, struct tcp_sock *tp, }
 	if (unlikely(OPTION_SACK_ADVERTISE & options)) {
-		*ptr++ = htonl((TCPOPT_NOP << 24) |
-			       (TCPOPT_NOP << 16) |
+		*ptr++ = htonl((leftover_bytes << 16) |
 			       (TCPOPT_SACK_PERM << 8) |
 			       TCPOLEN_SACK_PERM);
+		leftover_bytes = NOP_LEFTOVER;
Why? isn't leftover_bytes already == NOP_LEFTOVER?
 	}

 	if (unlikely(OPTION_WSCALE & options)) {
-		*ptr++ = htonl((TCPOPT_NOP << 24) |
+		u8 highbyte = TCPOPT_NOP;
+
+		if (unlikely(leftover_size == 1))

How can the above conditional be true?

+			highbyte = leftover_bytes >> 8;
+		*ptr++ = htonl((highbyte << 24) |
 			       (TCPOPT_WINDOW << 16) |
 			       (TCPOLEN_WINDOW << 8) |
 			       opts->ws);
+		leftover_bytes = NOP_LEFTOVER;
Why? isn't leftover_bytes already == NOP_LEFTOVER?
 	}

 	if (unlikely(opts->num_sack_blocks)) {
@@ -781,8 +790,7 @@ static void tcp_options_write(struct tcphdr *th, struct tcp_sock *tp, tp->duplicate_sack : tp->selective_acks; int this_sack;
-		*ptr++ = htonl((TCPOPT_NOP << 24) |
-			       (TCPOPT_NOP << 16) |
+		*ptr++ = htonl((leftover_bytes << 16) |
 			       (TCPOPT_SACK << 8) |
 			       (TCPOLEN_SACK_BASE + (opts->num_sack_blocks *
 						     TCPOLEN_SACK_PERBLOCK)));
@@ -794,6 +802,10 @@ static void tcp_options_write(struct tcphdr *th, struct tcp_sock *tp, } tp->rx_opt.dsack = 0;
+	} else if (unlikely(leftover_bytes != NOP_LEFTOVER)) {
I really feel like I'm missing some code chunk, but I don't see any possible value for leftover_bytes other than NOP_LEFTOVER
/P
On Tue, 29 Apr 2025, Paolo Abeni wrote:
On 4/22/25 5:35 PM, chia-yu.chang@nokia-bell-labs.com wrote:

[...]

+	} else if (unlikely(leftover_bytes != NOP_LEFTOVER)) {
I really feel like I'm missing some code chunk, but I don't see any possible value for leftover_bytes other than NOP_LEFTOVER
Hi,
I split it this way to keep the generic part of the leftover mechanism in its own patch, separate from the AccECN option change itself (as you did later discover). This is among the most convoluted parts in the entire AccECN patch series, so it seemed wise to split it as small as I could. Those nonsensical-looking assignments are there to avoid code churn in the latter patch. The changelog could have stated that clearly though (my fault from years back).
-----Original Message----- From: Ilpo Järvinen ij@kernel.org Sent: Tuesday, May 6, 2025 1:10 AM To: Paolo Abeni pabeni@redhat.com Cc: Chia-Yu Chang (Nokia) chia-yu.chang@nokia-bell-labs.com; horms@kernel.org; dsahern@kernel.org; kuniyu@amazon.com; bpf@vger.kernel.org; netdev@vger.kernel.org; dave.taht@gmail.com; jhs@mojatatu.com; kuba@kernel.org; stephen@networkplumber.org; xiyou.wangcong@gmail.com; jiri@resnulli.us; davem@davemloft.net; edumazet@google.com; andrew+netdev@lunn.ch; donald.hunter@gmail.com; ast@fiberby.net; liuhangbin@gmail.com; shuah@kernel.org; linux-kselftest@vger.kernel.org; ncardwell@google.com; Koen De Schepper (Nokia) koen.de_schepper@nokia-bell-labs.com; g.white g.white@cablelabs.com; ingemar.s.johansson@ericsson.com; mirja.kuehlewind@ericsson.com; cheshire@apple.com; rs.ietf@gmx.at; Jason_Livingood@comcast.com; vidhi_goel vidhi_goel@apple.com Subject: Re: [PATCH v5 net-next 07/15] tcp: allow embedding leftover into option padding
On Tue, 29 Apr 2025, Paolo Abeni wrote:

[...]
Hi,
I split it this way to keep the generic part of the leftover mechanism in its own patch, separate from the AccECN option change itself (as you did later discover). This is among the most convoluted parts in the entire AccECN patch series, so it seemed wise to split it as small as I could. Those nonsensical-looking assignments are there to avoid code churn in the latter patch. The changelog could have stated that clearly though (my fault from years back).
-- i.
Hi Ilpo,
Thanks for further clarifications, and I will add more comments in this patch.
Chia-Yu
From: Ilpo Järvinen ij@kernel.org
1) Don't return early when the SACK option doesn't fit. AccECN code will be placed after this fragment, so no early returns please.
2) Make sure opts->num_sack_blocks is not left undefined. E.g., tcp_current_mss() does not memset its opts struct to zero. AccECN code checks if SACK option is present and may even alter it to make room for AccECN option when many SACK blocks are present. Thus, num_sack_blocks needs to be always valid.
Signed-off-by: Ilpo Järvinen ij@kernel.org Signed-off-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com --- net/ipv4/tcp_output.c | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-)
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index d63f505a30e2..ad97bb9951fd 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -1103,17 +1103,18 @@ static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb eff_sacks = tp->rx_opt.num_sacks + tp->rx_opt.dsack; if (unlikely(eff_sacks)) { const unsigned int remaining = MAX_TCP_OPTION_SPACE - size; - if (unlikely(remaining < TCPOLEN_SACK_BASE_ALIGNED + - TCPOLEN_SACK_PERBLOCK)) - return size; - - opts->num_sack_blocks = - min_t(unsigned int, eff_sacks, - (remaining - TCPOLEN_SACK_BASE_ALIGNED) / - TCPOLEN_SACK_PERBLOCK); - - size += TCPOLEN_SACK_BASE_ALIGNED + - opts->num_sack_blocks * TCPOLEN_SACK_PERBLOCK; + if (likely(remaining >= TCPOLEN_SACK_BASE_ALIGNED + + TCPOLEN_SACK_PERBLOCK)) { + opts->num_sack_blocks = + min_t(unsigned int, eff_sacks, + (remaining - TCPOLEN_SACK_BASE_ALIGNED) / + TCPOLEN_SACK_PERBLOCK); + + size += TCPOLEN_SACK_BASE_ALIGNED + + opts->num_sack_blocks * TCPOLEN_SACK_PERBLOCK; + } + } else { + opts->num_sack_blocks = 0; }
if (unlikely(BPF_SOCK_OPS_TEST_FLAG(tp,
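The sizing arithmetic above in standalone form: the aligned SACK base is 4 bytes (2 option bytes plus 2 NOPs) and each block takes 8 bytes, so e.g. 28 remaining bytes fit 3 of 4 eligible blocks:

#include <stdio.h>

#define TCPOLEN_SACK_BASE_ALIGNED	4
#define TCPOLEN_SACK_PERBLOCK		8

int main(void)
{
        unsigned int remaining = 28, eff_sacks = 4;
        unsigned int blocks = (remaining - TCPOLEN_SACK_BASE_ALIGNED) /
                              TCPOLEN_SACK_PERBLOCK;

        if (blocks > eff_sacks)
                blocks = eff_sacks;
        printf("%u blocks, %u bytes\n", blocks,
               TCPOLEN_SACK_BASE_ALIGNED +
               blocks * TCPOLEN_SACK_PERBLOCK); /* 3 blocks, 28 bytes */
        return 0;
}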
From: Ilpo Järvinen ij@kernel.org
The Accurate ECN allows echoing back the sum of bytes for each IP ECN field value in the received packets using AccECN option. This change implements AccECN option tx & rx side processing without option send control related features that are added by a later change.
Based on specification: https://tools.ietf.org/id/draft-ietf-tcpm-accurate-ecn-28.txt (Some features of the spec will be added in the later changes rather than in this one).
A full-length AccECN option is always attempted, but if it does not fit, the minimum length is selected based on the counters that have changed since the last update. The AccECN option (with 24-bit fields) often ends up with an odd size, so the option write code tries to take advantage of the NOPs used to pad the other TCP options.
The delivered_ecn_bytes counters pair with received_ecn_bytes similarly to how delivered_ce pairs with received_ce. In contrast to the ACE field, however, the option is not always available to update delivered_ecn_bytes. For ACKs without the AccECN option, the delivered bytes calculated from the cumulative ACK+SACK information are assigned to one of the counters using an estimation heuristic that selects the most likely ECN byte counter. Any estimation error is corrected when the next AccECN option arrives. The heuristic may get too confused when there are enough differing byte counter deltas between ACKs carrying the AccECN option, in which case it just gives up on updating the counters for a while.
The tcp_ecn_option sysctl can be used to select the option sending mode for AccECN.
Signed-off-by: Ilpo Järvinen ij@kernel.org Signed-off-by: Neal Cardwell ncardwell@google.com Signed-off-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com --- include/linux/tcp.h | 8 +- include/net/netns/ipv4.h | 1 + include/net/tcp.h | 13 +++ include/uapi/linux/tcp.h | 7 ++ net/ipv4/sysctl_net_ipv4.c | 9 ++ net/ipv4/tcp.c | 15 +++- net/ipv4/tcp_input.c | 171 +++++++++++++++++++++++++++++++++++-- net/ipv4/tcp_ipv4.c | 1 + net/ipv4/tcp_output.c | 129 ++++++++++++++++++++++++++++ 9 files changed, 346 insertions(+), 8 deletions(-)
diff --git a/include/linux/tcp.h b/include/linux/tcp.h index 9cbfefd693e3..0e032d9631ac 100644 --- a/include/linux/tcp.h +++ b/include/linux/tcp.h @@ -122,8 +122,9 @@ struct tcp_options_received { smc_ok : 1, /* SMC seen on SYN packet */ snd_wscale : 4, /* Window scaling received from sender */ rcv_wscale : 4; /* Window scaling to send to receiver */ - u8 saw_unknown:1, /* Received unknown option */ - unused:7; + u8 accecn:6, /* AccECN index in header, 0=no options */ + saw_unknown:1, /* Received unknown option */ + unused:1; u8 num_sacks; /* Number of SACK blocks */ u16 user_mss; /* mss requested by user in ioctl */ u16 mss_clamp; /* Maximal mss, negotiated at connection setup */ @@ -302,10 +303,13 @@ struct tcp_sock { u32 snd_up; /* Urgent pointer */ u32 delivered; /* Total data packets delivered incl. rexmits */ u32 delivered_ce; /* Like the above but only ECE marked packets */ + u32 delivered_ecn_bytes[3]; u32 received_ce; /* Like the above but for rcvd CE marked pkts */ u32 received_ecn_bytes[3]; u8 received_ce_pending:4, /* Not yet transmit cnt of received_ce */ unused2:4; + u8 accecn_minlen:2,/* Minimum length of AccECN option sent */ + est_ecnfield:2;/* ECN field for AccECN delivered estimates */ u32 app_limited; /* limited until "delivered" reaches this val */ u32 rcv_wnd; /* Current receiver window */ /* diff --git a/include/net/netns/ipv4.h b/include/net/netns/ipv4.h index 6373e3f17da8..4569a9ef4fb8 100644 --- a/include/net/netns/ipv4.h +++ b/include/net/netns/ipv4.h @@ -148,6 +148,7 @@ struct netns_ipv4 { struct local_ports ip_local_ports;
u8 sysctl_tcp_ecn; + u8 sysctl_tcp_ecn_option; u8 sysctl_tcp_ecn_fallback;
u8 sysctl_ip_default_ttl; diff --git a/include/net/tcp.h b/include/net/tcp.h index 6ffa4ae085db..bfff2a9f95bf 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -204,6 +204,8 @@ static_assert((1 << ATO_BITS) > TCP_DELACK_MAX); #define TCPOPT_AO 29 /* Authentication Option (RFC5925) */ #define TCPOPT_MPTCP 30 /* Multipath TCP (RFC6824) */ #define TCPOPT_FASTOPEN 34 /* Fast open (RFC7413) */ +#define TCPOPT_ACCECN0 172 /* 0xAC: Accurate ECN Order 0 */ +#define TCPOPT_ACCECN1 174 /* 0xAE: Accurate ECN Order 1 */ #define TCPOPT_EXP 254 /* Experimental */ /* Magic number to be after the option value for sharing TCP * experimental options. See draft-ietf-tcpm-experimental-options-00.txt @@ -221,6 +223,7 @@ static_assert((1 << ATO_BITS) > TCP_DELACK_MAX); #define TCPOLEN_TIMESTAMP 10 #define TCPOLEN_MD5SIG 18 #define TCPOLEN_FASTOPEN_BASE 2 +#define TCPOLEN_ACCECN_BASE 2 #define TCPOLEN_EXP_FASTOPEN_BASE 4 #define TCPOLEN_EXP_SMC_BASE 6
@@ -234,6 +237,13 @@ static_assert((1 << ATO_BITS) > TCP_DELACK_MAX); #define TCPOLEN_MD5SIG_ALIGNED 20 #define TCPOLEN_MSS_ALIGNED 4 #define TCPOLEN_EXP_SMC_BASE_ALIGNED 8 +#define TCPOLEN_ACCECN_PERFIELD 3 + +/* Maximum number of byte counters in AccECN option + size */ +#define TCP_ACCECN_NUMFIELDS 3 +#define TCP_ACCECN_MAXSIZE (TCPOLEN_ACCECN_BASE + \ + TCPOLEN_ACCECN_PERFIELD * \ + TCP_ACCECN_NUMFIELDS)
/* tp->accecn_fail_mode */ #define TCP_ACCECN_ACE_FAIL_SEND BIT(0) @@ -1056,6 +1066,9 @@ static inline void tcp_accecn_init_counters(struct tcp_sock *tp) tp->received_ce = 0; tp->received_ce_pending = 0; __tcp_accecn_init_bytes_counters(tp->received_ecn_bytes); + __tcp_accecn_init_bytes_counters(tp->delivered_ecn_bytes); + tp->accecn_minlen = 0; + tp->est_ecnfield = 0; }
/* State flags for sacked in struct tcp_skb_cb */ diff --git a/include/uapi/linux/tcp.h b/include/uapi/linux/tcp.h index dc8fdc80e16b..74ac8a5d2e00 100644 --- a/include/uapi/linux/tcp.h +++ b/include/uapi/linux/tcp.h @@ -298,6 +298,13 @@ struct tcp_info { __u32 tcpi_snd_wnd; /* peer's advertised receive window after * scaling (bytes) */ + __u32 tcpi_received_ce; /* # of CE marks received */ + __u32 tcpi_delivered_e1_bytes; /* Accurate ECN byte counters */ + __u32 tcpi_delivered_e0_bytes; + __u32 tcpi_delivered_ce_bytes; + __u32 tcpi_received_e1_bytes; + __u32 tcpi_received_e0_bytes; + __u32 tcpi_received_ce_bytes; __u32 tcpi_rcv_wnd; /* local advertised receive window after * scaling (bytes) */ diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c index 75ec1a599b52..1d7fd86ca7b9 100644 --- a/net/ipv4/sysctl_net_ipv4.c +++ b/net/ipv4/sysctl_net_ipv4.c @@ -731,6 +731,15 @@ static struct ctl_table ipv4_net_table[] = { .extra1 = SYSCTL_ZERO, .extra2 = &tcp_ecn_mode_max, }, + { + .procname = "tcp_ecn_option", + .data = &init_net.ipv4.sysctl_tcp_ecn_option, + .maxlen = sizeof(u8), + .mode = 0644, + .proc_handler = proc_dou8vec_minmax, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_TWO, + }, { .procname = "tcp_ecn_fallback", .data = &init_net.ipv4.sysctl_tcp_ecn_fallback, diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 1e21bdf43f23..89799f73c451 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -270,6 +270,7 @@
#include <net/icmp.h> #include <net/inet_common.h> +#include <net/inet_ecn.h> #include <net/tcp.h> #include <net/mptcp.h> #include <net/proto_memory.h> @@ -4109,6 +4110,9 @@ void tcp_get_info(struct sock *sk, struct tcp_info *info) { const struct tcp_sock *tp = tcp_sk(sk); /* iff sk_type == SOCK_STREAM */ const struct inet_connection_sock *icsk = inet_csk(sk); + const u8 ect1_idx = INET_ECN_ECT_1 - 1; + const u8 ect0_idx = INET_ECN_ECT_0 - 1; + const u8 ce_idx = INET_ECN_CE - 1; unsigned long rate; u32 now; u64 rate64; @@ -4227,6 +4231,14 @@ void tcp_get_info(struct sock *sk, struct tcp_info *info) info->tcpi_rehash = tp->plb_rehash + tp->timeout_rehash; info->tcpi_fastopen_client_fail = tp->fastopen_client_fail;
+ info->tcpi_received_ce = tp->received_ce; + info->tcpi_delivered_e1_bytes = tp->delivered_ecn_bytes[ect1_idx]; + info->tcpi_delivered_e0_bytes = tp->delivered_ecn_bytes[ect0_idx]; + info->tcpi_delivered_ce_bytes = tp->delivered_ecn_bytes[ce_idx]; + info->tcpi_received_e1_bytes = tp->received_ecn_bytes[ect1_idx]; + info->tcpi_received_e0_bytes = tp->received_ecn_bytes[ect0_idx]; + info->tcpi_received_ce_bytes = tp->received_ecn_bytes[ce_idx]; + info->tcpi_total_rto = tp->total_rto; info->tcpi_total_rto_recoveries = tp->total_rto_recoveries; info->tcpi_total_rto_time = tp->total_rto_time; @@ -5091,6 +5103,7 @@ static void __init tcp_struct_check(void) CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, snd_up); CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, delivered); CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, delivered_ce); + CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, delivered_ecn_bytes); CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, received_ce); CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, received_ecn_bytes); CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, app_limited); @@ -5100,7 +5113,7 @@ static void __init tcp_struct_check(void) /* 32bit arches with 8byte alignment on u64 fields might need padding * before tcp_clock_cache. */ - CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 109 + 7); + CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 122 + 6);
/* RX read-write hotpath cache lines */ CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_rx, bytes_received); diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 5bd7fc9bcf66..41e45b9aff3f 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -70,6 +70,7 @@ #include <linux/sysctl.h> #include <linux/kernel.h> #include <linux/prefetch.h> +#include <linux/bitops.h> #include <net/dst.h> #include <net/tcp.h> #include <net/proto_memory.h> @@ -499,6 +500,144 @@ static bool tcp_ecn_rcv_ecn_echo(const struct tcp_sock *tp, const struct tcphdr return false; }
+/* Maps IP ECN field ECT/CE code point to AccECN option field number, given + * we are sending fields with Accurate ECN Order 1: ECT(1), CE, ECT(0). + */ +static u8 tcp_ecnfield_to_accecn_optfield(u8 ecnfield) +{ + switch (ecnfield) { + case INET_ECN_NOT_ECT: + return 0; /* AccECN does not send counts of NOT_ECT */ + case INET_ECN_ECT_1: + return 1; + case INET_ECN_CE: + return 2; + case INET_ECN_ECT_0: + return 3; + default: + WARN_ONCE(1, "bad ECN code point: %d\n", ecnfield); + } + return 0; +} + +/* Maps IP ECN field ECT/CE code point to AccECN option field value offset. + * Some fields do not start from zero, to detect zeroing by middleboxes. + */ +static u32 tcp_accecn_field_init_offset(u8 ecnfield) +{ + switch (ecnfield) { + case INET_ECN_NOT_ECT: + return 0; /* AccECN does not send counts of NOT_ECT */ + case INET_ECN_ECT_1: + return TCP_ACCECN_E1B_INIT_OFFSET; + case INET_ECN_CE: + return TCP_ACCECN_CEB_INIT_OFFSET; + case INET_ECN_ECT_0: + return TCP_ACCECN_E0B_INIT_OFFSET; + default: + WARN_ONCE(1, "bad ECN code point: %d\n", ecnfield); + } + return 0; +} + +/* Maps AccECN option field #nr to IP ECN field ECT/CE bits */ +static unsigned int tcp_accecn_optfield_to_ecnfield(unsigned int optfield, + bool order) +{ + u8 tmp; + + optfield = order ? 2 - optfield : optfield; + tmp = optfield + 2; + + return (tmp + (tmp >> 2)) & INET_ECN_MASK; +} + +/* Handles AccECN option ECT and CE 24-bit byte counters update into + * the u32 value in tcp_sock. As we're processing TCP options, it is + * safe to access from - 1. + */ +static s32 tcp_update_ecn_bytes(u32 *cnt, const char *from, u32 init_offset) +{ + u32 truncated = (get_unaligned_be32(from - 1) - init_offset) & + 0xFFFFFFU; + u32 delta = (truncated - *cnt) & 0xFFFFFFU; + + /* If delta has the highest bit set (24th bit) indicating + * negative, sign extend to correct an estimation using + * sign_extend32(delta, 24 - 1) + */ + delta = sign_extend32(delta, 23); + *cnt += delta; + return (s32)delta; +} + +/* Returns true if the byte counters can be used */ +static bool tcp_accecn_process_option(struct tcp_sock *tp, + const struct sk_buff *skb, + u32 delivered_bytes, int flag) +{ + u8 estimate_ecnfield = tp->est_ecnfield; + bool ambiguous_ecn_bytes_incr = false; + bool first_changed = false; + unsigned int optlen; + unsigned char *ptr; + bool order1, res; + unsigned int i; + + if (!(flag & FLAG_SLOWPATH) || !tp->rx_opt.accecn) { + if (estimate_ecnfield) { + u8 ecnfield = estimate_ecnfield - 1; + + tp->delivered_ecn_bytes[ecnfield] += delivered_bytes; + return true; + } + return false; + } + + ptr = skb_transport_header(skb) + tp->rx_opt.accecn; + optlen = ptr[1] - 2; + WARN_ON_ONCE(ptr[0] != TCPOPT_ACCECN0 && ptr[0] != TCPOPT_ACCECN1); + order1 = (ptr[0] == TCPOPT_ACCECN1); + ptr += 2; + + res = !!estimate_ecnfield; + for (i = 0; i < 3; i++) { + if (optlen >= TCPOLEN_ACCECN_PERFIELD) { + u32 init_offset; + u8 ecnfield; + s32 delta; + u32 *cnt; + + ecnfield = tcp_accecn_optfield_to_ecnfield(i, order1); + init_offset = tcp_accecn_field_init_offset(ecnfield); + cnt = &tp->delivered_ecn_bytes[ecnfield - 1]; + delta = tcp_update_ecn_bytes(cnt, ptr, init_offset); + if (delta) { + if (delta < 0) { + res = false; + ambiguous_ecn_bytes_incr = true; + } + if (ecnfield != estimate_ecnfield) { + if (!first_changed) { + tp->est_ecnfield = ecnfield; + first_changed = true; + } else { + res = false; + ambiguous_ecn_bytes_incr = true; + } + } + } + + optlen -= TCPOLEN_ACCECN_PERFIELD; + ptr += TCPOLEN_ACCECN_PERFIELD; + } + } + if 
(ambiguous_ecn_bytes_incr) + tp->est_ecnfield = 0; + + return res; +} + static void tcp_count_delivered_ce(struct tcp_sock *tp, u32 ecn_count) { tp->delivered_ce += ecn_count; @@ -515,7 +654,8 @@ static void tcp_count_delivered(struct tcp_sock *tp, u32 delivered,
/* Returns the ECN CE delta */ static u32 __tcp_accecn_process(struct sock *sk, const struct sk_buff *skb, - u32 delivered_pkts, int flag) + u32 delivered_pkts, u32 delivered_bytes, + int flag) { const struct tcphdr *th = tcp_hdr(skb); struct tcp_sock *tp = tcp_sk(sk); @@ -526,6 +666,8 @@ static u32 __tcp_accecn_process(struct sock *sk, const struct sk_buff *skb, if (!(flag & (FLAG_FORWARD_PROGRESS | FLAG_TS_PROGRESS))) return 0;
+ tcp_accecn_process_option(tp, skb, delivered_bytes, flag); + if (!(flag & FLAG_SLOWPATH)) { /* AccECN counter might overflow on large ACKs */ if (delivered_pkts <= TCP_ACCECN_CEP_ACE_MASK) @@ -551,12 +693,14 @@ static u32 __tcp_accecn_process(struct sock *sk, const struct sk_buff *skb, }
static u32 tcp_accecn_process(struct sock *sk, const struct sk_buff *skb, - u32 delivered_pkts, int *flag) + u32 delivered_pkts, u32 delivered_bytes, + int *flag) { struct tcp_sock *tp = tcp_sk(sk); u32 delta;
- delta = __tcp_accecn_process(sk, skb, delivered_pkts, *flag); + delta = __tcp_accecn_process(sk, skb, delivered_pkts, + delivered_bytes, *flag); if (delta > 0) { tcp_count_delivered_ce(tp, delta); *flag |= FLAG_ECE; @@ -4212,6 +4356,7 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag) if (tcp_ecn_mode_accecn(tp)) ecn_count = tcp_accecn_process(sk, skb, tp->delivered - delivered, + sack_state.delivered_bytes, &flag);
tcp_in_ack_event(sk, flag); @@ -4251,6 +4396,7 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag) if (tcp_ecn_mode_accecn(tp)) ecn_count = tcp_accecn_process(sk, skb, tp->delivered - delivered, + sack_state.delivered_bytes, &flag); tcp_in_ack_event(sk, flag); /* If data was DSACKed, see if we can undo a cwnd reduction. */ @@ -4378,6 +4524,7 @@ void tcp_parse_options(const struct net *net,
ptr = (const unsigned char *)(th + 1); opt_rx->saw_tstamp = 0; + opt_rx->accecn = 0; opt_rx->saw_unknown = 0;
while (length > 0) { @@ -4469,6 +4616,12 @@ void tcp_parse_options(const struct net *net, ptr, th->syn, foc, false); break;
+ case TCPOPT_ACCECN0: + case TCPOPT_ACCECN1: + /* Save offset of AccECN option in TCP header */ + opt_rx->accecn = (ptr - 2) - (__u8 *)th; + break; + case TCPOPT_EXP: /* Fast Open option shares code 254 using a * 16 bits magic number. @@ -4529,11 +4682,14 @@ static bool tcp_fast_parse_options(const struct net *net, */ if (th->doff == (sizeof(*th) / 4)) { tp->rx_opt.saw_tstamp = 0; + tp->rx_opt.accecn = 0; return false; } else if (tp->rx_opt.tstamp_ok && th->doff == ((sizeof(*th) + TCPOLEN_TSTAMP_ALIGNED) / 4)) { - if (tcp_parse_aligned_timestamp(tp, th)) + if (tcp_parse_aligned_timestamp(tp, th)) { + tp->rx_opt.accecn = 0; return true; + } }
tcp_parse_options(net, skb, &tp->rx_opt, 1, NULL); @@ -6133,8 +6289,12 @@ void tcp_ecn_received_counters(struct sock *sk, const struct sk_buff *skb, tp->received_ce_pending = min(tp->received_ce_pending + pcount, 0xfU);
- if (payload_len > 0) + if (payload_len > 0) { + u8 minlen = tcp_ecnfield_to_accecn_optfield(ecnfield); tp->received_ecn_bytes[ecnfield - 1] += payload_len; + tp->accecn_minlen = max_t(u8, tp->accecn_minlen, + minlen); + } } }
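The 24-bit wrap handling of tcp_update_ecn_bytes() earlier in this patch can be exercised standalone; a sketch that skips the init_offset and unaligned-read details and just shows the modulo-2^24 delta with sign extension:

#include <stdio.h>
#include <stdint.h>

static int32_t update_ecn_bytes(uint32_t *cnt, uint32_t wire24)
{
        uint32_t delta = (wire24 - *cnt) & 0xFFFFFFu;

        if (delta & 0x800000u)          /* bit 23 set: negative estimate */
                delta |= 0xFF000000u;   /* sign extend */
        *cnt += delta;
        return (int32_t)delta;
}

int main(void)
{
        uint32_t cnt = 0xFFFFF0u;       /* local counter near 24-bit wrap */

        /* the peer's 24-bit field wrapped and now reads 0x000010 */
        printf("delta=%d cnt=0x%08x\n",
               (int)update_ecn_bytes(&cnt, 0x10u), (unsigned int)cnt);
        return 0;       /* delta=32 cnt=0x01000010 */
}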
@@ -6358,6 +6518,7 @@ void tcp_rcv_established(struct sock *sk, struct sk_buff *skb) */
tp->rx_opt.saw_tstamp = 0; + tp->rx_opt.accecn = 0;
/* pred_flags is 0xS?10 << 16 + snd_wnd * if header_prediction is to be made diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 5c5d4b94b59c..3f3e285fc973 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -3450,6 +3450,7 @@ static void __net_init tcp_set_hashinfo(struct net *net) static int __net_init tcp_sk_init(struct net *net) { net->ipv4.sysctl_tcp_ecn = 2; + net->ipv4.sysctl_tcp_ecn_option = 2; net->ipv4.sysctl_tcp_ecn_fallback = 1;
net->ipv4.sysctl_tcp_base_mss = TCP_BASE_MSS; diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index ad97bb9951fd..a36de6c539da 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -491,6 +491,7 @@ static inline bool tcp_urg_mode(const struct tcp_sock *tp) #define OPTION_SMC BIT(9) #define OPTION_MPTCP BIT(10) #define OPTION_AO BIT(11) +#define OPTION_ACCECN BIT(12)
static void smc_options_write(__be32 *ptr, u16 *options) { @@ -512,12 +513,14 @@ struct tcp_out_options { u16 mss; /* 0 to disable */ u8 ws; /* window scale, 0 to disable */ u8 num_sack_blocks; /* number of SACK blocks to include */ + u8 num_accecn_fields; /* number of AccECN fields needed */ u8 hash_size; /* bytes in hash_location */ u8 bpf_opt_len; /* length of BPF hdr option */ __u8 *hash_location; /* temporary pointer, overloaded */ __u32 tsval, tsecr; /* need to include OPTION_TS */ struct tcp_fastopen_cookie *fastopen_cookie; /* Fast open cookie */ struct mptcp_out_options mptcp; + u32 *ecn_bytes; /* AccECN ECT/CE byte counters */ };
static void mptcp_options_write(struct tcphdr *th, __be32 *ptr, @@ -766,6 +769,47 @@ static void tcp_options_write(struct tcphdr *th, struct tcp_sock *tp, *ptr++ = htonl(opts->tsecr); }
+ if (OPTION_ACCECN & options) { + const u8 ect0_idx = INET_ECN_ECT_0 - 1; + const u8 ect1_idx = INET_ECN_ECT_1 - 1; + const u8 ce_idx = INET_ECN_CE - 1; + u32 e0b; + u32 e1b; + u32 ceb; + u8 len; + + e0b = opts->ecn_bytes[ect0_idx] + TCP_ACCECN_E0B_INIT_OFFSET; + e1b = opts->ecn_bytes[ect1_idx] + TCP_ACCECN_E1B_INIT_OFFSET; + ceb = opts->ecn_bytes[ce_idx] + TCP_ACCECN_CEB_INIT_OFFSET; + len = TCPOLEN_ACCECN_BASE + + opts->num_accecn_fields * TCPOLEN_ACCECN_PERFIELD; + + if (opts->num_accecn_fields == 2) { + *ptr++ = htonl((TCPOPT_ACCECN1 << 24) | (len << 16) | + ((e1b >> 8) & 0xffff)); + *ptr++ = htonl(((e1b & 0xff) << 24) | + (ceb & 0xffffff)); + } else if (opts->num_accecn_fields == 1) { + *ptr++ = htonl((TCPOPT_ACCECN1 << 24) | (len << 16) | + ((e1b >> 8) & 0xffff)); + leftover_bytes = ((e1b & 0xff) << 8) | + TCPOPT_NOP; + leftover_size = 1; + } else if (opts->num_accecn_fields == 0) { + leftover_bytes = (TCPOPT_ACCECN1 << 8) | len; + leftover_size = 2; + } else if (opts->num_accecn_fields == 3) { + *ptr++ = htonl((TCPOPT_ACCECN1 << 24) | (len << 16) | + ((e1b >> 8) & 0xffff)); + *ptr++ = htonl(((e1b & 0xff) << 24) | + (ceb & 0xffffff)); + *ptr++ = htonl(((e0b & 0xffffff) << 8) | + TCPOPT_NOP); + } + if (tp) + tp->accecn_minlen = 0; + } + if (unlikely(OPTION_SACK_ADVERTISE & options)) { *ptr++ = htonl((leftover_bytes << 16) | (TCPOPT_SACK_PERM << 8) | @@ -886,6 +930,60 @@ static void mptcp_set_option_cond(const struct request_sock *req, } }
+/* Initial values for AccECN option, ordering is based on ECN field bits + * similar to received_ecn_bytes. Used for SYN/ACK AccECN option. + */ +static u32 synack_ecn_bytes[3] = { 0, 0, 0 }; + +static u32 tcp_synack_options_combine_saving(struct tcp_out_options *opts) +{ + /* How much room is there for combining with the alignment padding? */ + if ((opts->options & (OPTION_SACK_ADVERTISE | OPTION_TS)) == + OPTION_SACK_ADVERTISE) + return 2; + else if (opts->options & OPTION_WSCALE) + return 1; + return 0; +} + +/* Calculates how long an AccECN option will fit into the @remaining option space. + * + * AccECN option can sometimes replace NOPs used for alignment of other + * TCP options (up to @max_combine_saving available). + * + * Only solutions with at least @required AccECN fields are accepted. + * + * Returns: The size of the AccECN option excluding space repurposed from + * the alignment of the other options. + */ +static int tcp_options_fit_accecn(struct tcp_out_options *opts, int required, + int remaining, int max_combine_saving) +{ + int size = TCP_ACCECN_MAXSIZE; + + opts->num_accecn_fields = TCP_ACCECN_NUMFIELDS; + + while (opts->num_accecn_fields >= required) { + int leftover_size = size & 0x3; + /* Pad to dword if cannot combine */ + if (leftover_size > max_combine_saving) + leftover_size = -((4 - leftover_size) & 0x3); + + if (remaining >= size - leftover_size) { + size -= leftover_size; + break; + } + + opts->num_accecn_fields--; + size -= TCPOLEN_ACCECN_PERFIELD; + } + if (opts->num_accecn_fields < required) + return 0; + + opts->options |= OPTION_ACCECN; + return size; +} + /* Compute TCP options for SYN packets. This is not the final * network wire format yet. */ @@ -968,6 +1066,17 @@ static unsigned int tcp_syn_options(struct sock *sk, struct sk_buff *skb, } }
+ /* Simultaneous open SYN/ACK needs AccECN option but not SYN */ + if (unlikely((TCP_SKB_CB(skb)->tcp_flags & TCPHDR_ACK) && + tcp_ecn_mode_accecn(tp) && + sock_net(sk)->ipv4.sysctl_tcp_ecn_option && + remaining >= TCPOLEN_ACCECN_BASE)) { + u32 saving = tcp_synack_options_combine_saving(opts); + + opts->ecn_bytes = synack_ecn_bytes; + remaining -= tcp_options_fit_accecn(opts, 0, remaining, saving); + } + bpf_skops_hdr_opt_len(sk, skb, NULL, NULL, 0, opts, &remaining);
return MAX_TCP_OPTION_SPACE - remaining; @@ -985,6 +1094,7 @@ static unsigned int tcp_synack_options(const struct sock *sk, { struct inet_request_sock *ireq = inet_rsk(req); unsigned int remaining = MAX_TCP_OPTION_SPACE; + struct tcp_request_sock *treq = tcp_rsk(req);
if (tcp_key_is_md5(key)) { opts->options |= OPTION_MD5; @@ -1047,6 +1157,14 @@ static unsigned int tcp_synack_options(const struct sock *sk,
smc_set_option_cond(tcp_sk(sk), ireq, opts, &remaining);
+ if (treq->accecn_ok && sock_net(sk)->ipv4.sysctl_tcp_ecn_option && + remaining >= TCPOLEN_ACCECN_BASE) { + u32 saving = tcp_synack_options_combine_saving(opts); + + opts->ecn_bytes = synack_ecn_bytes; + remaining -= tcp_options_fit_accecn(opts, 0, remaining, saving); + } + bpf_skops_hdr_opt_len((struct sock *)sk, skb, req, syn_skb, synack_type, opts, &remaining);
@@ -1117,6 +1235,17 @@ static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb opts->num_sack_blocks = 0; }
+ if (tcp_ecn_mode_accecn(tp) && + sock_net(sk)->ipv4.sysctl_tcp_ecn_option) { + int saving = opts->num_sack_blocks > 0 ? 2 : 0; + int remaining = MAX_TCP_OPTION_SPACE - size; + + opts->ecn_bytes = tp->received_ecn_bytes; + size += tcp_options_fit_accecn(opts, tp->accecn_minlen, + remaining, + saving); + } + if (unlikely(BPF_SOCK_OPS_TEST_FLAG(tp, BPF_SOCK_OPS_WRITE_HDR_OPT_CB_FLAG))) { unsigned int remaining = MAX_TCP_OPTION_SPACE - size;
On 4/22/25 5:35 PM, chia-yu.chang@nokia-bell-labs.com wrote:
@@ -302,10 +303,13 @@ struct tcp_sock { u32 snd_up; /* Urgent pointer */ u32 delivered; /* Total data packets delivered incl. rexmits */ u32 delivered_ce; /* Like the above but only ECE marked packets */
- u32 delivered_ecn_bytes[3];
These new fields do not belong to this cacheline group. I'm unsure they belong to the fast path at all. Also, a u32 will wrap around very soon.
[...]
diff --git a/include/uapi/linux/tcp.h b/include/uapi/linux/tcp.h index dc8fdc80e16b..74ac8a5d2e00 100644 --- a/include/uapi/linux/tcp.h +++ b/include/uapi/linux/tcp.h @@ -298,6 +298,13 @@ struct tcp_info { __u32 tcpi_snd_wnd; /* peer's advertised receive window after * scaling (bytes) */
- __u32 tcpi_received_ce; /* # of CE marks received */
- __u32 tcpi_delivered_e1_bytes; /* Accurate ECN byte counters */
- __u32 tcpi_delivered_e0_bytes;
- __u32 tcpi_delivered_ce_bytes;
- __u32 tcpi_received_e1_bytes;
- __u32 tcpi_received_e0_bytes;
- __u32 tcpi_received_ce_bytes;
This will break uAPI: new fields must be added at the end, or must fill existing holes. Also, a u32 set in stone in uAPI for a byte counter looks way too small.
@@ -5100,7 +5113,7 @@ static void __init tcp_struct_check(void) /* 32bit arches with 8byte alignment on u64 fields might need padding * before tcp_clock_cache. */
- CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 109 + 7);
- CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 122 + 6);
The above means an additional cacheline in fast-path WRT the current status. IMHO should be avoided.
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 5bd7fc9bcf66..41e45b9aff3f 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -70,6 +70,7 @@ #include <linux/sysctl.h> #include <linux/kernel.h> #include <linux/prefetch.h> +#include <linux/bitops.h> #include <net/dst.h> #include <net/tcp.h> #include <net/proto_memory.h> @@ -499,6 +500,144 @@ static bool tcp_ecn_rcv_ecn_echo(const struct tcp_sock *tp, const struct tcphdr return false; } +/* Maps IP ECN field ECT/CE code point to AccECN option field number, given
- we are sending fields with Accurate ECN Order 1: ECT(1), CE, ECT(0).
- */
+static u8 tcp_ecnfield_to_accecn_optfield(u8 ecnfield) +{
- switch (ecnfield) {
- case INET_ECN_NOT_ECT:
return 0; /* AccECN does not send counts of NOT_ECT */
- case INET_ECN_ECT_1:
return 1;
- case INET_ECN_CE:
return 2;
- case INET_ECN_ECT_0:
return 3;
- default:
WARN_ONCE(1, "bad ECN code point: %d\n", ecnfield);
No WARN_ONCE() above please: either the 'ecnfield' data is masked vs INET_ECN_MASK and the WARN_ONCE should not be possible or a remote sender can deterministically trigger a WARN() which nowadays will in turn raise a CVE...
[...]
+static u32 tcp_accecn_field_init_offset(u8 ecnfield) +{
- switch (ecnfield) {
- case INET_ECN_NOT_ECT:
return 0; /* AccECN does not send counts of NOT_ECT */
- case INET_ECN_ECT_1:
return TCP_ACCECN_E1B_INIT_OFFSET;
- case INET_ECN_CE:
return TCP_ACCECN_CEB_INIT_OFFSET;
- case INET_ECN_ECT_0:
return TCP_ACCECN_E0B_INIT_OFFSET;
- default:
WARN_ONCE(1, "bad ECN code point: %d\n", ecnfield);
Same as above.
- }
- return 0;
+}
+/* Maps AccECN option field #nr to IP ECN field ECT/CE bits */ +static unsigned int tcp_accecn_optfield_to_ecnfield(unsigned int optfield,
bool order)
+{
- u8 tmp;
- optfield = order ? 2 - optfield : optfield;
- tmp = optfield + 2;
- return (tmp + (tmp >> 2)) & INET_ECN_MASK;
+}
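The bit trick above can be sanity-checked in user space; a minimal standalone sketch, assuming the kernel's ECN code points (INET_ECN_NOT_ECT=0, INET_ECN_ECT_1=1, INET_ECN_ECT_0=2, INET_ECN_CE=3):

#include <stdio.h>

#define INET_ECN_MASK 3

/* Same arithmetic as tcp_accecn_optfield_to_ecnfield() above */
static unsigned int optfield_to_ecnfield(unsigned int optfield, int order1)
{
	unsigned int tmp;

	optfield = order1 ? 2 - optfield : optfield;
	tmp = optfield + 2;
	return (tmp + (tmp >> 2)) & INET_ECN_MASK;
}

int main(void)
{
	unsigned int i;

	/* Order 1 prints 1, 3, 2: ECT(1), CE, ECT(0) */
	for (i = 0; i < 3; i++)
		printf("order1 field %u -> %u\n", i, optfield_to_ecnfield(i, 1));
	/* Order 0 prints 2, 3, 1: ECT(0), CE, ECT(1) */
	for (i = 0; i < 3; i++)
		printf("order0 field %u -> %u\n", i, optfield_to_ecnfield(i, 0));
	return 0;
}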
+/* Handles AccECN option ECT and CE 24-bit byte counters update into
- the u32 value in tcp_sock. As we're processing TCP options, it is
- safe to access from - 1.
- */
+static s32 tcp_update_ecn_bytes(u32 *cnt, const char *from, u32 init_offset) +{
- u32 truncated = (get_unaligned_be32(from - 1) - init_offset) &
0xFFFFFFU;
- u32 delta = (truncated - *cnt) & 0xFFFFFFU;
- /* If delta has the highest bit set (24th bit) indicating
* negative, sign extend to correct an estimation using
* sign_extend32(delta, 24 - 1)
*/
- delta = sign_extend32(delta, 23);
- *cnt += delta;
- return (s32)delta;
+}
+/* Returns true if the byte counters can be used */ +static bool tcp_accecn_process_option(struct tcp_sock *tp,
const struct sk_buff *skb,
u32 delivered_bytes, int flag)
+{
- u8 estimate_ecnfield = tp->est_ecnfield;
- bool ambiguous_ecn_bytes_incr = false;
- bool first_changed = false;
- unsigned int optlen;
- unsigned char *ptr;
- bool order1, res;
- unsigned int i;
- if (!(flag & FLAG_SLOWPATH) || !tp->rx_opt.accecn) {
if (estimate_ecnfield) {
u8 ecnfield = estimate_ecnfield - 1;
tp->delivered_ecn_bytes[ecnfield] += delivered_bytes;
return true;
}
return false;
- }
- ptr = skb_transport_header(skb) + tp->rx_opt.accecn;
- optlen = ptr[1] - 2;
This assumes optlen is greater than 2, but I don't see the relevant check. Are TCP options present at all?
- WARN_ON_ONCE(ptr[0] != TCPOPT_ACCECN0 && ptr[0] != TCPOPT_ACCECN1);
Please, don't warn for arbitrary wrong data sent from the peer.
- order1 = (ptr[0] == TCPOPT_ACCECN1);
- ptr += 2;
- res = !!estimate_ecnfield;
- for (i = 0; i < 3; i++) {
if (optlen >= TCPOLEN_ACCECN_PERFIELD) {
u32 init_offset;
u8 ecnfield;
s32 delta;
u32 *cnt;
ecnfield = tcp_accecn_optfield_to_ecnfield(i, order1);
init_offset = tcp_accecn_field_init_offset(ecnfield);
cnt = &tp->delivered_ecn_bytes[ecnfield - 1];
delta = tcp_update_ecn_bytes(cnt, ptr, init_offset);
if (delta) {
if (delta < 0) {
res = false;
ambiguous_ecn_bytes_incr = true;
}
if (ecnfield != estimate_ecnfield) {
if (!first_changed) {
tp->est_ecnfield = ecnfield;
first_changed = true;
} else {
res = false;
ambiguous_ecn_bytes_incr = true;
}
At least 2 indentation levels above the maximum readable.
[...]
@@ -4378,6 +4524,7 @@ void tcp_parse_options(const struct net *net, ptr = (const unsigned char *)(th + 1); opt_rx->saw_tstamp = 0;
- opt_rx->accecn = 0; opt_rx->saw_unknown = 0;
It would be good to be able to zero both 'accecn' and 'saw_unknown' with a single statement.
[...]
@@ -766,6 +769,47 @@ static void tcp_options_write(struct tcphdr *th, struct tcp_sock *tp, *ptr++ = htonl(opts->tsecr); }
- if (OPTION_ACCECN & options) {
const u8 ect0_idx = INET_ECN_ECT_0 - 1;
const u8 ect1_idx = INET_ECN_ECT_1 - 1;
const u8 ce_idx = INET_ECN_CE - 1;
u32 e0b;
u32 e1b;
u32 ceb;
u8 len;
e0b = opts->ecn_bytes[ect0_idx] + TCP_ACCECN_E0B_INIT_OFFSET;
e1b = opts->ecn_bytes[ect1_idx] + TCP_ACCECN_E1B_INIT_OFFSET;
ceb = opts->ecn_bytes[ce_idx] + TCP_ACCECN_CEB_INIT_OFFSET;
len = TCPOLEN_ACCECN_BASE +
opts->num_accecn_fields * TCPOLEN_ACCECN_PERFIELD;
if (opts->num_accecn_fields == 2) {
*ptr++ = htonl((TCPOPT_ACCECN1 << 24) | (len << 16) |
((e1b >> 8) & 0xffff));
*ptr++ = htonl(((e1b & 0xff) << 24) |
(ceb & 0xffffff));
} else if (opts->num_accecn_fields == 1) {
*ptr++ = htonl((TCPOPT_ACCECN1 << 24) | (len << 16) |
((e1b >> 8) & 0xffff));
leftover_bytes = ((e1b & 0xff) << 8) |
TCPOPT_NOP;
leftover_size = 1;
} else if (opts->num_accecn_fields == 0) {
leftover_bytes = (TCPOPT_ACCECN1 << 8) | len;
leftover_size = 2;
} else if (opts->num_accecn_fields == 3) {
*ptr++ = htonl((TCPOPT_ACCECN1 << 24) | (len << 16) |
((e1b >> 8) & 0xffff));
*ptr++ = htonl(((e1b & 0xff) << 24) |
(ceb & 0xffffff));
*ptr++ = htonl(((e0b & 0xffffff) << 8) |
TCPOPT_NOP);
The above chunk and the contents of patch 7 must be in the same patch. This split makes the review even harder.
[...]
@@ -1117,6 +1235,17 @@ static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb opts->num_sack_blocks = 0; }
- if (tcp_ecn_mode_accecn(tp) &&
sock_net(sk)->ipv4.sysctl_tcp_ecn_option) {
int saving = opts->num_sack_blocks > 0 ? 2 : 0;
int remaining = MAX_TCP_OPTION_SPACE - size;
AFAICS the above means tcp_options_fit_accecn() must clear any already-set options, but apparently it does not do so. Have you tested with something adding largish options like mptcp?
/P
-----Original Message----- From: Paolo Abeni pabeni@redhat.com Sent: Tuesday, April 29, 2025 1:56 PM To: Chia-Yu Chang (Nokia) chia-yu.chang@nokia-bell-labs.com; horms@kernel.org; dsahern@kernel.org; kuniyu@amazon.com; bpf@vger.kernel.org; netdev@vger.kernel.org; dave.taht@gmail.com; jhs@mojatatu.com; kuba@kernel.org; stephen@networkplumber.org; xiyou.wangcong@gmail.com; jiri@resnulli.us; davem@davemloft.net; edumazet@google.com; andrew+netdev@lunn.ch; donald.hunter@gmail.com; ast@fiberby.net; liuhangbin@gmail.com; shuah@kernel.org; linux-kselftest@vger.kernel.org; ij@kernel.org; ncardwell@google.com; Koen De Schepper (Nokia) koen.de_schepper@nokia-bell-labs.com; g.white g.white@cablelabs.com; ingemar.s.johansson@ericsson.com; mirja.kuehlewind@ericsson.com; cheshire@apple.com; rs.ietf@gmx.at; Jason_Livingood@comcast.com; vidhi_goel vidhi_goel@apple.com Subject: Re: [PATCH v5 net-next 09/15] tcp: accecn: AccECN option
CAUTION: This is an external email. Please be very careful when clicking links or opening attachments. See the URL nok.it/ext for additional information.
On 4/22/25 5:35 PM, chia-yu.chang@nokia-bell-labs.com wrote:
@@ -302,10 +303,13 @@ struct tcp_sock { u32 snd_up; /* Urgent pointer */ u32 delivered; /* Total data packets delivered incl. rexmits */ u32 delivered_ce; /* Like the above but only ECE marked packets */
u32 delivered_ecn_bytes[3];
These new fields do not belong to this cacheline group. I'm unsure they belong to the fast path at all. Also, a u32 will wrap around very soon.
Hi Paolo,
Thanks for the feedback.
Could you then advise which cacheline group it belongs to? If there are some tools that can be shared, that would be appreciated.
[...]
diff --git a/include/uapi/linux/tcp.h b/include/uapi/linux/tcp.h index dc8fdc80e16b..74ac8a5d2e00 100644 --- a/include/uapi/linux/tcp.h +++ b/include/uapi/linux/tcp.h @@ -298,6 +298,13 @@ struct tcp_info { __u32 tcpi_snd_wnd; /* peer's advertised receive window after * scaling (bytes) */
__u32 tcpi_received_ce; /* # of CE marks received */
__u32 tcpi_delivered_e1_bytes; /* Accurate ECN byte counters */
__u32 tcpi_delivered_e0_bytes;
__u32 tcpi_delivered_ce_bytes;
__u32 tcpi_received_e1_bytes;
__u32 tcpi_received_e0_bytes;
__u32 tcpi_received_ce_bytes;
This will break uAPI: new fields must be added at the end, or must fill existing holes. Also, a u32 set in stone in uAPI for a byte counter looks way too small.
I will move them to the end or fill existing holes using pahole. Indeed u32 is not big, but based on the algorithms in A.2.1 and A.1 of the AccECN draft, byte counters wider than 24 bits shall be fine. This has also been verified using TCP Prague.
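To illustrate the 24-bit-to-u32 extension this relies on, here is a minimal user-space sketch of the tcp_update_ecn_bytes() arithmetic (simplified from the patch, assumed semantics): as long as fewer than 2^23 bytes arrive between two received options, the truncated delta sign-extends correctly and the wider local counter keeps counting past 2^24.

#include <stdio.h>
#include <stdint.h>

/* Extend a 24-bit wire counter into a u32: compute the truncated
 * delta, sign-extend bit 23, and accumulate into the wider counter.
 */
static int32_t update_ecn_bytes(uint32_t *cnt, uint32_t wire24)
{
	uint32_t delta = (wire24 - *cnt) & 0xFFFFFFu;

	if (delta & 0x800000u)		/* sign-extend bit 23 */
		delta |= 0xFF000000u;
	*cnt += delta;
	return (int32_t)delta;
}

int main(void)
{
	uint32_t cnt = 0xFFFF00;		/* just below the 24-bit wrap */

	update_ecn_bytes(&cnt, 0x000100);	/* wire field has wrapped */
	printf("cnt = 0x%x\n", cnt);		/* 0x1000100: wrap handled */
	return 0;
}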
@@ -5100,7 +5113,7 @@ static void __init tcp_struct_check(void) /* 32bit arches with 8byte alignment on u64 fields might need padding * before tcp_clock_cache. */
CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 109 + 7);
CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock,
- tcp_sock_write_txrx, 122 + 6);
The above means an additional cacheline in fast-path WRT the current status. IMHO should be avoided.
OK, I did this to avoid the line width warning of patchcheck, but will change it back.
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 5bd7fc9bcf66..41e45b9aff3f 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -70,6 +70,7 @@ #include <linux/sysctl.h> #include <linux/kernel.h> #include <linux/prefetch.h> +#include <linux/bitops.h> #include <net/dst.h> #include <net/tcp.h> #include <net/proto_memory.h> @@ -499,6 +500,144 @@ static bool tcp_ecn_rcv_ecn_echo(const struct tcp_sock *tp, const struct tcphdr return false; }
+/* Maps IP ECN field ECT/CE code point to AccECN option field number, +given
- we are sending fields with Accurate ECN Order 1: ECT(1), CE, ECT(0).
- */
+static u8 tcp_ecnfield_to_accecn_optfield(u8 ecnfield) {
switch (ecnfield) {
case INET_ECN_NOT_ECT:
return 0; /* AccECN does not send counts of NOT_ECT */
case INET_ECN_ECT_1:
return 1;
case INET_ECN_CE:
return 2;
case INET_ECN_ECT_0:
return 3;
default:
WARN_ONCE(1, "bad ECN code point: %d\n", ecnfield);
No WARN_ONCE() above please: either the 'ecnfield' data is masked vs INET_ECN_MASK and the WARN_ONCE should not be possible or a remote sender can deterministically trigger a WARN() which nowadays will in turn raise a CVE...
Sure, I will add the mask here.
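A minimal sketch of what that could look like (hypothetical, untested):

/* Hypothetical: mask the input so the default branch (and thus any
 * WARN) is unreachable for attacker-controlled values.
 */
static u8 tcp_ecnfield_to_accecn_optfield(u8 ecnfield)
{
	switch (ecnfield & INET_ECN_MASK) {
	case INET_ECN_NOT_ECT:
		return 0;	/* AccECN does not send counts of NOT_ECT */
	case INET_ECN_ECT_1:
		return 1;
	case INET_ECN_CE:
		return 2;
	default:		/* INET_ECN_ECT_0 */
		return 3;
	}
}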
[...]
+static u32 tcp_accecn_field_init_offset(u8 ecnfield) {
switch (ecnfield) {
case INET_ECN_NOT_ECT:
return 0; /* AccECN does not send counts of NOT_ECT */
case INET_ECN_ECT_1:
return TCP_ACCECN_E1B_INIT_OFFSET;
case INET_ECN_CE:
return TCP_ACCECN_CEB_INIT_OFFSET;
case INET_ECN_ECT_0:
return TCP_ACCECN_E0B_INIT_OFFSET;
default:
WARN_ONCE(1, "bad ECN code point: %d\n", ecnfield);
Same as above.
}
return 0;
+}
+/* Maps AccECN option field #nr to IP ECN field ECT/CE bits */ static +unsigned int tcp_accecn_optfield_to_ecnfield(unsigned int optfield,
bool order) {
u8 tmp;
optfield = order ? 2 - optfield : optfield;
tmp = optfield + 2;
return (tmp + (tmp >> 2)) & INET_ECN_MASK; }
+/* Handles AccECN option ECT and CE 24-bit byte counters update into
- the u32 value in tcp_sock. As we're processing TCP options, it is
- safe to access from - 1.
- */
+static s32 tcp_update_ecn_bytes(u32 *cnt, const char *from, u32 +init_offset) {
u32 truncated = (get_unaligned_be32(from - 1) - init_offset) &
0xFFFFFFU;
u32 delta = (truncated - *cnt) & 0xFFFFFFU;
/* If delta has the highest bit set (24th bit) indicating
* negative, sign extend to correct an estimation using
* sign_extend32(delta, 24 - 1)
*/
delta = sign_extend32(delta, 23);
*cnt += delta;
return (s32)delta;
+}
+/* Returns true if the byte counters can be used */ static bool +tcp_accecn_process_option(struct tcp_sock *tp,
const struct sk_buff *skb,
u32 delivered_bytes, int flag) {
u8 estimate_ecnfield = tp->est_ecnfield;
bool ambiguous_ecn_bytes_incr = false;
bool first_changed = false;
unsigned int optlen;
unsigned char *ptr;
bool order1, res;
unsigned int i;
if (!(flag & FLAG_SLOWPATH) || !tp->rx_opt.accecn) {
if (estimate_ecnfield) {
u8 ecnfield = estimate_ecnfield - 1;
tp->delivered_ecn_bytes[ecnfield] += delivered_bytes;
return true;
}
return false;
}
ptr = skb_transport_header(skb) + tp->rx_opt.accecn;
optlen = ptr[1] - 2;
This assumes optlen is greater than 2, but I don't see the relevant check. Are TCP options present at all?
This function is executed only when AccECN mode is negotiated, and the above condition "if (!(flag & FLAG_SLOWPATH) || !tp->rx_opt.accecn)" covers the case in which the AccECN option is not present. So I would think this is safe; please let me know if you think otherwise.
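If an explicit guard is nevertheless wanted, it would be cheap (hypothetical sketch):

	ptr = skb_transport_header(skb) + tp->rx_opt.accecn;
	/* Hypothetical belt-and-braces check; the option parser already
	 * rejects option lengths below 2, so this should never trigger.
	 */
	if (ptr[1] < 2)
		return false;
	optlen = ptr[1] - 2;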
WARN_ON_ONCE(ptr[0] != TCPOPT_ACCECN0 && ptr[0] !=
- TCPOPT_ACCECN1);
Please, don't warn for arbitrary wrong data sent from the peer.
Sure, will remove.
order1 = (ptr[0] == TCPOPT_ACCECN1);
ptr += 2;
res = !!estimate_ecnfield;
for (i = 0; i < 3; i++) {
if (optlen >= TCPOLEN_ACCECN_PERFIELD) {
u32 init_offset;
u8 ecnfield;
s32 delta;
u32 *cnt;
ecnfield = tcp_accecn_optfield_to_ecnfield(i, order1);
init_offset = tcp_accecn_field_init_offset(ecnfield);
cnt = &tp->delivered_ecn_bytes[ecnfield - 1];
delta = tcp_update_ecn_bytes(cnt, ptr, init_offset);
if (delta) {
if (delta < 0) {
res = false;
ambiguous_ecn_bytes_incr = true;
}
if (ecnfield != estimate_ecnfield) {
if (!first_changed) {
tp->est_ecnfield = ecnfield;
first_changed = true;
} else {
res = false;
ambiguous_ecn_bytes_incr = true;
}
At least 2 indentation levels above the maximum readable.
OK, let me think how to simplify it in next version.
[...]
@@ -4378,6 +4524,7 @@ void tcp_parse_options(const struct net *net,
ptr = (const unsigned char *)(th + 1); opt_rx->saw_tstamp = 0;
opt_rx->accecn = 0; opt_rx->saw_unknown = 0;
It would be good to be able to zero both 'accecn' and 'saw_unknown' with a single statement.
ok, will do.
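One hypothetical way to do that, assuming the two fields can be grouped inside struct tcp_options_received:

/* Hypothetical layout sketch: group the per-segment parse state so
 * tcp_parse_options() can reset it with a single store.
 */
	struct {
		u8	accecn;		/* AccECN option offset, 0 = not seen */
		u8	saw_unknown;	/* received an unknown option */
	} seen;

/* ...and in tcp_parse_options():
 *	opt_rx->seen = (typeof(opt_rx->seen)){ 0 };
 */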
[...]
@@ -766,6 +769,47 @@ static void tcp_options_write(struct tcphdr *th, struct tcp_sock *tp, *ptr++ = htonl(opts->tsecr); }
if (OPTION_ACCECN & options) {
const u8 ect0_idx = INET_ECN_ECT_0 - 1;
const u8 ect1_idx = INET_ECN_ECT_1 - 1;
const u8 ce_idx = INET_ECN_CE - 1;
u32 e0b;
u32 e1b;
u32 ceb;
u8 len;
e0b = opts->ecn_bytes[ect0_idx] + TCP_ACCECN_E0B_INIT_OFFSET;
e1b = opts->ecn_bytes[ect1_idx] + TCP_ACCECN_E1B_INIT_OFFSET;
ceb = opts->ecn_bytes[ce_idx] + TCP_ACCECN_CEB_INIT_OFFSET;
len = TCPOLEN_ACCECN_BASE +
opts->num_accecn_fields * TCPOLEN_ACCECN_PERFIELD;
if (opts->num_accecn_fields == 2) {
*ptr++ = htonl((TCPOPT_ACCECN1 << 24) | (len << 16) |
((e1b >> 8) & 0xffff));
*ptr++ = htonl(((e1b & 0xff) << 24) |
(ceb & 0xffffff));
} else if (opts->num_accecn_fields == 1) {
*ptr++ = htonl((TCPOPT_ACCECN1 << 24) | (len << 16) |
((e1b >> 8) & 0xffff));
leftover_bytes = ((e1b & 0xff) << 8) |
TCPOPT_NOP;
leftover_size = 1;
} else if (opts->num_accecn_fields == 0) {
leftover_bytes = (TCPOPT_ACCECN1 << 8) | len;
leftover_size = 2;
} else if (opts->num_accecn_fields == 3) {
*ptr++ = htonl((TCPOPT_ACCECN1 << 24) | (len << 16) |
((e1b >> 8) & 0xffff));
*ptr++ = htonl(((e1b & 0xff) << 24) |
(ceb & 0xffffff));
*ptr++ = htonl(((e0b & 0xffffff) << 8) |
TCPOPT_NOP);
The above chunk and the contents of patch 7 must be in the same patch. This split makes the review even harder.
Thanks for the feedback, I will merge these 2 patches.
[...]
@@ -1117,6 +1235,17 @@ static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb opts->num_sack_blocks = 0; }
if (tcp_ecn_mode_accecn(tp) &&
sock_net(sk)->ipv4.sysctl_tcp_ecn_option) {
int saving = opts->num_sack_blocks > 0 ? 2 : 0;
int remaining = MAX_TCP_OPTION_SPACE - size;
AFAICS the above means tcp_options_fit_accecn() must clear any already-set options, but apparently it does not do so. Have you tested with something adding largish options like mptcp?
I see this part is NOT meant to clear already-set options, but to calculate how large an AccECN option will fit into the remaining option space.
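A worked example of that calculation, assuming TCPOLEN_ACCECN_BASE = 2 and TCPOLEN_ACCECN_PERFIELD = 3 as in the draft (so TCP_ACCECN_MAXSIZE = 11), with max_combine_saving of 0..2 bytes as computed by the callers:

/* 3 fields: size = 11, leftover = 11 & 3 = 3; more than 2 spare NOPs
 *           never exist here, so the option is padded to 12 bytes
 * 2 fields: size = 8, dword-aligned, consumes exactly 8 bytes
 * 1 field:  size = 5, leftover = 1; rides on 1 spare NOP -> 4 new
 *           bytes, otherwise padded to 8
 * 0 fields: size = 2 (kind + length); fits entirely into 2 spare
 *           NOPs -> 0 new bytes, otherwise padded to 4
 *
 * The loop walks down from 3 fields until one of these fits into
 * 'remaining', or returns 0 once it drops below 'required'.
 */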
/P
On Tue, 29 Apr 2025, Paolo Abeni wrote:
On 4/22/25 5:35 PM, chia-yu.chang@nokia-bell-labs.com wrote:
@@ -302,10 +303,13 @@ struct tcp_sock { u32 snd_up; /* Urgent pointer */ u32 delivered; /* Total data packets delivered incl. rexmits */ u32 delivered_ce; /* Like the above but only ECE marked packets */
- u32 delivered_ecn_bytes[3];
These new fields do not belong to this cacheline group. I'm unsure they belong to the fast path at all. Also, a u32 will wrap around very soon.
[...]
diff --git a/include/uapi/linux/tcp.h b/include/uapi/linux/tcp.h index dc8fdc80e16b..74ac8a5d2e00 100644 --- a/include/uapi/linux/tcp.h +++ b/include/uapi/linux/tcp.h @@ -298,6 +298,13 @@ struct tcp_info { __u32 tcpi_snd_wnd; /* peer's advertised receive window after * scaling (bytes) */
- __u32 tcpi_received_ce; /* # of CE marks received */
- __u32 tcpi_delivered_e1_bytes; /* Accurate ECN byte counters */
- __u32 tcpi_delivered_e0_bytes;
- __u32 tcpi_delivered_ce_bytes;
- __u32 tcpi_received_e1_bytes;
- __u32 tcpi_received_e0_bytes;
- __u32 tcpi_received_ce_bytes;
This will break uAPI: new fields must be added at the end, or must fill existing holes. Also, a u32 set in stone in uAPI for a byte counter looks way too small.
@@ -5100,7 +5113,7 @@ static void __init tcp_struct_check(void) /* 32bit arches with 8byte alignment on u64 fields might need padding * before tcp_clock_cache. */
- CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 109 + 7);
- CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 122 + 6);
The above means an additional cacheline in fast-path WRT the current status. IMHO should be avoided.
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 5bd7fc9bcf66..41e45b9aff3f 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -70,6 +70,7 @@ #include <linux/sysctl.h> #include <linux/kernel.h> #include <linux/prefetch.h> +#include <linux/bitops.h> #include <net/dst.h> #include <net/tcp.h> #include <net/proto_memory.h> @@ -499,6 +500,144 @@ static bool tcp_ecn_rcv_ecn_echo(const struct tcp_sock *tp, const struct tcphdr return false; } +/* Maps IP ECN field ECT/CE code point to AccECN option field number, given
- we are sending fields with Accurate ECN Order 1: ECT(1), CE, ECT(0).
- */
+static u8 tcp_ecnfield_to_accecn_optfield(u8 ecnfield) +{
- switch (ecnfield) {
- case INET_ECN_NOT_ECT:
return 0; /* AccECN does not send counts of NOT_ECT */
- case INET_ECN_ECT_1:
return 1;
- case INET_ECN_CE:
return 2;
- case INET_ECN_ECT_0:
return 3;
- default:
WARN_ONCE(1, "bad ECN code point: %d\n", ecnfield);
No WARN_ONCE() above please: either the 'ecnfield' data is masked vs INET_ECN_MASK and the WARN_ONCE should not be possible or a remote sender can deterministically trigger a WARN() which nowadays will in turn raise a CVE...
[...]
+static u32 tcp_accecn_field_init_offset(u8 ecnfield) +{
- switch (ecnfield) {
- case INET_ECN_NOT_ECT:
return 0; /* AccECN does not send counts of NOT_ECT */
- case INET_ECN_ECT_1:
return TCP_ACCECN_E1B_INIT_OFFSET;
- case INET_ECN_CE:
return TCP_ACCECN_CEB_INIT_OFFSET;
- case INET_ECN_ECT_0:
return TCP_ACCECN_E0B_INIT_OFFSET;
- default:
WARN_ONCE(1, "bad ECN code point: %d\n", ecnfield);
Same as above.
- }
- return 0;
+}
+/* Maps AccECN option field #nr to IP ECN field ECT/CE bits */ +static unsigned int tcp_accecn_optfield_to_ecnfield(unsigned int optfield,
bool order)
+{
- u8 tmp;
- optfield = order ? 2 - optfield : optfield;
- tmp = optfield + 2;
- return (tmp + (tmp >> 2)) & INET_ECN_MASK;
+}
+/* Handles AccECN option ECT and CE 24-bit byte counters update into
- the u32 value in tcp_sock. As we're processing TCP options, it is
- safe to access from - 1.
- */
+static s32 tcp_update_ecn_bytes(u32 *cnt, const char *from, u32 init_offset) +{
- u32 truncated = (get_unaligned_be32(from - 1) - init_offset) &
0xFFFFFFU;
- u32 delta = (truncated - *cnt) & 0xFFFFFFU;
- /* If delta has the highest bit set (24th bit) indicating
* negative, sign extend to correct an estimation using
* sign_extend32(delta, 24 - 1)
*/
- delta = sign_extend32(delta, 23);
- *cnt += delta;
- return (s32)delta;
+}
+/* Returns true if the byte counters can be used */ +static bool tcp_accecn_process_option(struct tcp_sock *tp,
const struct sk_buff *skb,
u32 delivered_bytes, int flag)
+{
- u8 estimate_ecnfield = tp->est_ecnfield;
- bool ambiguous_ecn_bytes_incr = false;
- bool first_changed = false;
- unsigned int optlen;
- unsigned char *ptr;
u8 would be a more appropriate type for binary data.
- bool order1, res;
- unsigned int i;
- if (!(flag & FLAG_SLOWPATH) || !tp->rx_opt.accecn) {
if (estimate_ecnfield) {
u8 ecnfield = estimate_ecnfield - 1;
tp->delivered_ecn_bytes[ecnfield] += delivered_bytes;
return true;
}
return false;
- }
- ptr = skb_transport_header(skb) + tp->rx_opt.accecn;
- optlen = ptr[1] - 2;
This assumes optlen is greater than 2, but I don't see the relevant check.
The options parser should check that, please see the "silly options" check.
Are tcp options present at all?
There is the !tp->rx_opt.accecn check above, which should ensure we're only processing an AccECN option that is present.
- WARN_ON_ONCE(ptr[0] != TCPOPT_ACCECN0 && ptr[0] != TCPOPT_ACCECN1);
Please, don't warn for arbitrary wrong data sent from the peer.
If there isn't an AccECN option at ptr, there's a bug elsewhere in the code (in the option parsing code). So this is an internal sanity check that tp->rx_opt.accecn points to a real AccECN option like it should.
If you still want that removed, no problem, but it should not be arbitrary data at this point because the options parsing code should have validated this condition already; thus WARN_ON_ONCE() seemed appropriate to me.
- order1 = (ptr[0] == TCPOPT_ACCECN1);
- ptr += 2;
- res = !!estimate_ecnfield;
- for (i = 0; i < 3; i++) {
if (optlen >= TCPOLEN_ACCECN_PERFIELD) {
It's easy to reverse the logic here and use continue, which buys one level of indentation.
u32 init_offset;
u8 ecnfield;
s32 delta;
u32 *cnt;
ecnfield = tcp_accecn_optfield_to_ecnfield(i, order1);
init_offset = tcp_accecn_field_init_offset(ecnfield);
cnt = &tp->delivered_ecn_bytes[ecnfield - 1];
delta = tcp_update_ecn_bytes(cnt, ptr, init_offset);
if (delta) {
if (delta < 0) {
res = false;
ambiguous_ecn_bytes_incr = true;
}
if (ecnfield != estimate_ecnfield) {
if (!first_changed) {
tp->est_ecnfield = ecnfield;
first_changed = true;
} else {
res = false;
ambiguous_ecn_bytes_incr = true;
}
At least 2 indentation levels above the maximum readable.
[...]
@@ -4378,6 +4524,7 @@ void tcp_parse_options(const struct net *net, ptr = (const unsigned char *)(th + 1); opt_rx->saw_tstamp = 0;
- opt_rx->accecn = 0; opt_rx->saw_unknown = 0;
It would be good to be able to zero both 'accecn' and 'saw_unknown' with a single statement.
[...]
@@ -766,6 +769,47 @@ static void tcp_options_write(struct tcphdr *th, struct tcp_sock *tp, *ptr++ = htonl(opts->tsecr); }
- if (OPTION_ACCECN & options) {
const u8 ect0_idx = INET_ECN_ECT_0 - 1;
const u8 ect1_idx = INET_ECN_ECT_1 - 1;
const u8 ce_idx = INET_ECN_CE - 1;
u32 e0b;
u32 e1b;
u32 ceb;
u8 len;
e0b = opts->ecn_bytes[ect0_idx] + TCP_ACCECN_E0B_INIT_OFFSET;
e1b = opts->ecn_bytes[ect1_idx] + TCP_ACCECN_E1B_INIT_OFFSET;
ceb = opts->ecn_bytes[ce_idx] + TCP_ACCECN_CEB_INIT_OFFSET;
len = TCPOLEN_ACCECN_BASE +
opts->num_accecn_fields * TCPOLEN_ACCECN_PERFIELD;
if (opts->num_accecn_fields == 2) {
*ptr++ = htonl((TCPOPT_ACCECN1 << 24) | (len << 16) |
((e1b >> 8) & 0xffff));
*ptr++ = htonl(((e1b & 0xff) << 24) |
(ceb & 0xffffff));
} else if (opts->num_accecn_fields == 1) {
*ptr++ = htonl((TCPOPT_ACCECN1 << 24) | (len << 16) |
((e1b >> 8) & 0xffff));
leftover_bytes = ((e1b & 0xff) << 8) |
TCPOPT_NOP;
leftover_size = 1;
} else if (opts->num_accecn_fields == 0) {
leftover_bytes = (TCPOPT_ACCECN1 << 8) | len;
leftover_size = 2;
} else if (opts->num_accecn_fields == 3) {
*ptr++ = htonl((TCPOPT_ACCECN1 << 24) | (len << 16) |
((e1b >> 8) & 0xffff));
*ptr++ = htonl(((e1b & 0xff) << 24) |
(ceb & 0xffffff));
*ptr++ = htonl(((e0b & 0xffffff) << 8) |
TCPOPT_NOP);
The above chunk and the contents of patch 7 must be in the same patch. This split makes the review even harder.
[...]
@@ -1117,6 +1235,17 @@ static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb opts->num_sack_blocks = 0; }
- if (tcp_ecn_mode_accecn(tp) &&
sock_net(sk)->ipv4.sysctl_tcp_ecn_option) {
int saving = opts->num_sack_blocks > 0 ? 2 : 0;
int remaining = MAX_TCP_OPTION_SPACE - size;
AFAICS the above means tcp_options_fit_accecn() must clear any already-set options, but apparently it does not do so. Have you tested with something adding largish options like mptcp?
This "fitting" for AccEcn option is not to make room for the option but to check if AccECN option fits and in what length, and how it can take advantage of some nop bytes when available to save option space.
-----Original Message----- From: Ilpo Järvinen ij@kernel.org Sent: Tuesday, May 6, 2025 12:54 AM To: Paolo Abeni pabeni@redhat.com Cc: Chia-Yu Chang (Nokia) chia-yu.chang@nokia-bell-labs.com; horms@kernel.org; dsahern@kernel.org; kuniyu@amazon.com; bpf@vger.kernel.org; netdev@vger.kernel.org; dave.taht@gmail.com; jhs@mojatatu.com; kuba@kernel.org; stephen@networkplumber.org; xiyou.wangcong@gmail.com; jiri@resnulli.us; davem@davemloft.net; edumazet@google.com; andrew+netdev@lunn.ch; donald.hunter@gmail.com; ast@fiberby.net; liuhangbin@gmail.com; shuah@kernel.org; linux-kselftest@vger.kernel.org; ncardwell@google.com; Koen De Schepper (Nokia) koen.de_schepper@nokia-bell-labs.com; g.white g.white@cablelabs.com; ingemar.s.johansson@ericsson.com; mirja.kuehlewind@ericsson.com; cheshire@apple.com; rs.ietf@gmx.at; Jason_Livingood@comcast.com; vidhi_goel vidhi_goel@apple.com Subject: Re: [PATCH v5 net-next 09/15] tcp: accecn: AccECN option
CAUTION: This is an external email. Please be very careful when clicking links or opening attachments. See the URL nok.it/ext for additional information.
On Tue, 29 Apr 2025, Paolo Abeni wrote:
On 4/22/25 5:35 PM, chia-yu.chang@nokia-bell-labs.com wrote:
@@ -302,10 +303,13 @@ struct tcp_sock { u32 snd_up; /* Urgent pointer */ u32 delivered; /* Total data packets delivered incl. rexmits */ u32 delivered_ce; /* Like the above but only ECE marked packets */
- u32 delivered_ecn_bytes[3];
These new fields do not belong to this cacheline group. I'm unsure they belong to the fast path at all. Also, a u32 will wrap around very soon.
[...]
diff --git a/include/uapi/linux/tcp.h b/include/uapi/linux/tcp.h index dc8fdc80e16b..74ac8a5d2e00 100644 --- a/include/uapi/linux/tcp.h +++ b/include/uapi/linux/tcp.h @@ -298,6 +298,13 @@ struct tcp_info { __u32 tcpi_snd_wnd; /* peer's advertised receive window after * scaling (bytes) */
- __u32 tcpi_received_ce; /* # of CE marks received */
- __u32 tcpi_delivered_e1_bytes; /* Accurate ECN byte counters */
- __u32 tcpi_delivered_e0_bytes;
- __u32 tcpi_delivered_ce_bytes;
- __u32 tcpi_received_e1_bytes;
- __u32 tcpi_received_e0_bytes;
- __u32 tcpi_received_ce_bytes;
This will break uAPI: new fields must be added at the end, or must fill existing holes. Also, a u32 set in stone in uAPI for a byte counter looks way too small.
@@ -5100,7 +5113,7 @@ static void __init tcp_struct_check(void) /* 32bit arches with 8byte alignment on u64 fields might need padding * before tcp_clock_cache. */
- CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 109 + 7);
- CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock,
- tcp_sock_write_txrx, 122 + 6);
The above means an additional cacheline in fast-path WRT the current status. IMHO should be avoided.
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 5bd7fc9bcf66..41e45b9aff3f 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -70,6 +70,7 @@ #include <linux/sysctl.h> #include <linux/kernel.h> #include <linux/prefetch.h> +#include <linux/bitops.h> #include <net/dst.h> #include <net/tcp.h> #include <net/proto_memory.h> @@ -499,6 +500,144 @@ static bool tcp_ecn_rcv_ecn_echo(const struct tcp_sock *tp, const struct tcphdr return false; }
+/* Maps IP ECN field ECT/CE code point to AccECN option field +number, given
- we are sending fields with Accurate ECN Order 1: ECT(1), CE, ECT(0).
- */
+static u8 tcp_ecnfield_to_accecn_optfield(u8 ecnfield) {
- switch (ecnfield) {
- case INET_ECN_NOT_ECT:
return 0; /* AccECN does not send counts of NOT_ECT */
- case INET_ECN_ECT_1:
return 1;
- case INET_ECN_CE:
return 2;
- case INET_ECN_ECT_0:
return 3;
- default:
WARN_ONCE(1, "bad ECN code point: %d\n", ecnfield);
No WARN_ONCE() above please: either the 'ecnfield' data is masked vs INET_ECN_MASK and the WARN_ONCE should not be possible or a remote sender can deterministically trigger a WARN() which nowadays will in turn raise a CVE...
[...]
+static u32 tcp_accecn_field_init_offset(u8 ecnfield) {
- switch (ecnfield) {
- case INET_ECN_NOT_ECT:
return 0; /* AccECN does not send counts of NOT_ECT */
- case INET_ECN_ECT_1:
return TCP_ACCECN_E1B_INIT_OFFSET;
- case INET_ECN_CE:
return TCP_ACCECN_CEB_INIT_OFFSET;
- case INET_ECN_ECT_0:
return TCP_ACCECN_E0B_INIT_OFFSET;
- default:
WARN_ONCE(1, "bad ECN code point: %d\n", ecnfield);
Same as above.
- }
- return 0;
+}
+/* Maps AccECN option field #nr to IP ECN field ECT/CE bits */ +static unsigned int tcp_accecn_optfield_to_ecnfield(unsigned int optfield,
bool order) {
- u8 tmp;
- optfield = order ? 2 - optfield : optfield;
- tmp = optfield + 2;
- return (tmp + (tmp >> 2)) & INET_ECN_MASK; }
+/* Handles AccECN option ECT and CE 24-bit byte counters update +into
- the u32 value in tcp_sock. As we're processing TCP options, it
+is
- safe to access from - 1.
- */
+static s32 tcp_update_ecn_bytes(u32 *cnt, const char *from, u32 +init_offset) {
- u32 truncated = (get_unaligned_be32(from - 1) - init_offset) &
0xFFFFFFU;
- u32 delta = (truncated - *cnt) & 0xFFFFFFU;
- /* If delta has the highest bit set (24th bit) indicating
- negative, sign extend to correct an estimation using
- sign_extend32(delta, 24 - 1)
- */
- delta = sign_extend32(delta, 23);
- *cnt += delta;
- return (s32)delta;
+}
+/* Returns true if the byte counters can be used */ static bool +tcp_accecn_process_option(struct tcp_sock *tp,
const struct sk_buff *skb,
u32 delivered_bytes, int flag) {
- u8 estimate_ecnfield = tp->est_ecnfield;
- bool ambiguous_ecn_bytes_incr = false;
- bool first_changed = false;
- unsigned int optlen;
- unsigned char *ptr;
u8 would be a more appropriate type for binary data.
Hi Ilpo,
Not sure I understand your point; could you elaborate on which binary data you think should use u8?
- bool order1, res;
- unsigned int i;
- if (!(flag & FLAG_SLOWPATH) || !tp->rx_opt.accecn) {
if (estimate_ecnfield) {
u8 ecnfield = estimate_ecnfield - 1;
tp->delivered_ecn_bytes[ecnfield] += delivered_bytes;
return true;
}
return false;
- }
- ptr = skb_transport_header(skb) + tp->rx_opt.accecn;
- optlen = ptr[1] - 2;
This assumes optlen is greater than 2, but I don't see the relevant check.
The options parser should check that, please see the "silly options" check.
Are tcp options present at all?
There is the !tp->rx_opt.accecn check above, which should ensure we're only processing an AccECN option that is present.
- WARN_ON_ONCE(ptr[0] != TCPOPT_ACCECN0 && ptr[0] !=
- TCPOPT_ACCECN1);
Please, don't warn for arbitrary wrong data sent from the peer.
If there isn't an AccECN option at ptr, there's a bug elsewhere in the code (in the option parsing code). So this is an internal sanity check that tp->rx_opt.accecn points to a real AccECN option like it should.
If you still want that removed, no problem, but it should not be arbitrary data at this point because the options parsing code should have validated this condition already; thus WARN_ON_ONCE() seemed appropriate to me.
Indeed, then I will keep this for the next version, but it can be adjusted once further feedback is received.
- order1 = (ptr[0] == TCPOPT_ACCECN1);
- ptr += 2;
- res = !!estimate_ecnfield;
- for (i = 0; i < 3; i++) {
if (optlen >= TCPOLEN_ACCECN_PERFIELD) {
It's easy to reverse the logic here and use continue, which buys one level of indentation.
Sure, thanks for the explicit suggestion, will do.
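I.e. roughly (sketch; the rest of the loop body stays as before, one indentation level shallower):

	for (i = 0; i < 3; i++) {
		u32 init_offset;
		u8 ecnfield;
		s32 delta;
		u32 *cnt;

		if (optlen < TCPOLEN_ACCECN_PERFIELD)
			continue;	/* no full 24-bit field left */

		ecnfield = tcp_accecn_optfield_to_ecnfield(i, order1);
		init_offset = tcp_accecn_field_init_offset(ecnfield);
		cnt = &tp->delivered_ecn_bytes[ecnfield - 1];
		delta = tcp_update_ecn_bytes(cnt, ptr, init_offset);
		/* ... delta handling unchanged ... */
	}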
Chia-Yu
u32 init_offset;
u8 ecnfield;
s32 delta;
u32 *cnt;
ecnfield = tcp_accecn_optfield_to_ecnfield(i, order1);
init_offset = tcp_accecn_field_init_offset(ecnfield);
cnt = &tp->delivered_ecn_bytes[ecnfield - 1];
delta = tcp_update_ecn_bytes(cnt, ptr, init_offset);
if (delta) {
if (delta < 0) {
res = false;
ambiguous_ecn_bytes_incr = true;
}
if (ecnfield != estimate_ecnfield) {
if (!first_changed) {
tp->est_ecnfield = ecnfield;
first_changed = true;
} else {
res = false;
ambiguous_ecn_bytes_incr = true;
}
At least 2 indentation levels above the maximum readable.
[...]
@@ -4378,6 +4524,7 @@ void tcp_parse_options(const struct net *net,
ptr = (const unsigned char *)(th + 1); opt_rx->saw_tstamp = 0;
- opt_rx->accecn = 0; opt_rx->saw_unknown = 0;
It would be good to be able to zero both 'accecn' and 'saw_unknown' with a single statement.
[...]
@@ -766,6 +769,47 @@ static void tcp_options_write(struct tcphdr *th, struct tcp_sock *tp, *ptr++ = htonl(opts->tsecr); }
- if (OPTION_ACCECN & options) {
const u8 ect0_idx = INET_ECN_ECT_0 - 1;
const u8 ect1_idx = INET_ECN_ECT_1 - 1;
const u8 ce_idx = INET_ECN_CE - 1;
u32 e0b;
u32 e1b;
u32 ceb;
u8 len;
e0b = opts->ecn_bytes[ect0_idx] + TCP_ACCECN_E0B_INIT_OFFSET;
e1b = opts->ecn_bytes[ect1_idx] + TCP_ACCECN_E1B_INIT_OFFSET;
ceb = opts->ecn_bytes[ce_idx] + TCP_ACCECN_CEB_INIT_OFFSET;
len = TCPOLEN_ACCECN_BASE +
opts->num_accecn_fields * TCPOLEN_ACCECN_PERFIELD;
if (opts->num_accecn_fields == 2) {
*ptr++ = htonl((TCPOPT_ACCECN1 << 24) | (len << 16) |
((e1b >> 8) & 0xffff));
*ptr++ = htonl(((e1b & 0xff) << 24) |
(ceb & 0xffffff));
} else if (opts->num_accecn_fields == 1) {
*ptr++ = htonl((TCPOPT_ACCECN1 << 24) | (len << 16) |
((e1b >> 8) & 0xffff));
leftover_bytes = ((e1b & 0xff) << 8) |
TCPOPT_NOP;
leftover_size = 1;
} else if (opts->num_accecn_fields == 0) {
leftover_bytes = (TCPOPT_ACCECN1 << 8) | len;
leftover_size = 2;
} else if (opts->num_accecn_fields == 3) {
*ptr++ = htonl((TCPOPT_ACCECN1 << 24) | (len << 16) |
((e1b >> 8) & 0xffff));
*ptr++ = htonl(((e1b & 0xff) << 24) |
(ceb & 0xffffff));
*ptr++ = htonl(((e0b & 0xffffff) << 8) |
TCPOPT_NOP);
The above chunk and the contents of patch 7 must be in the same patch. This split makes the review even harder.
[...]
@@ -1117,6 +1235,17 @@ static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb opts->num_sack_blocks = 0; }
- if (tcp_ecn_mode_accecn(tp) &&
sock_net(sk)->ipv4.sysctl_tcp_ecn_option) {
int saving = opts->num_sack_blocks > 0 ? 2 : 0;
int remaining = MAX_TCP_OPTION_SPACE - size;
AFAICS the above means tcp_options_fit_accecn() must clear any already-set options, but apparently it does not do so. Have you tested with something adding largish options like mptcp?
This "fitting" for AccEcn option is not to make room for the option but to check if AccECN option fits and in what length, and how it can take advantage of some nop bytes when available to save option space.
-- i.
On Tue, 6 May 2025, Chia-Yu Chang (Nokia) wrote:
-----Original Message----- From: Ilpo Järvinen ij@kernel.org Sent: Tuesday, May 6, 2025 12:54 AM To: Paolo Abeni pabeni@redhat.com Cc: Chia-Yu Chang (Nokia) chia-yu.chang@nokia-bell-labs.com; horms@kernel.org; dsahern@kernel.org; kuniyu@amazon.com; bpf@vger.kernel.org; netdev@vger.kernel.org; dave.taht@gmail.com; jhs@mojatatu.com; kuba@kernel.org; stephen@networkplumber.org; xiyou.wangcong@gmail.com; jiri@resnulli.us; davem@davemloft.net; edumazet@google.com; andrew+netdev@lunn.ch; donald.hunter@gmail.com; ast@fiberby.net; liuhangbin@gmail.com; shuah@kernel.org; linux-kselftest@vger.kernel.org; ncardwell@google.com; Koen De Schepper (Nokia) koen.de_schepper@nokia-bell-labs.com; g.white g.white@cablelabs.com; ingemar.s.johansson@ericsson.com; mirja.kuehlewind@ericsson.com; cheshire@apple.com; rs.ietf@gmx.at; Jason_Livingood@comcast.com; vidhi_goel vidhi_goel@apple.com Subject: Re: [PATCH v5 net-next 09/15] tcp: accecn: AccECN option
CAUTION: This is an external email. Please be very careful when clicking links or opening attachments. See the URL nok.it/ext for additional information.
On Tue, 29 Apr 2025, Paolo Abeni wrote:
On 4/22/25 5:35 PM, chia-yu.chang@nokia-bell-labs.com wrote:
@@ -302,10 +303,13 @@ struct tcp_sock { u32 snd_up; /* Urgent pointer */ u32 delivered; /* Total data packets delivered incl. rexmits */ u32 delivered_ce; /* Like the above but only ECE marked packets */
- u32 delivered_ecn_bytes[3];
These new fields do not belong to this cacheline group. I'm unsure they belong to the fast path at all. Also, a u32 will wrap around very soon.
[...]
diff --git a/include/uapi/linux/tcp.h b/include/uapi/linux/tcp.h index dc8fdc80e16b..74ac8a5d2e00 100644 --- a/include/uapi/linux/tcp.h +++ b/include/uapi/linux/tcp.h @@ -298,6 +298,13 @@ struct tcp_info { __u32 tcpi_snd_wnd; /* peer's advertised receive window after * scaling (bytes) */
- __u32 tcpi_received_ce; /* # of CE marks received */
- __u32 tcpi_delivered_e1_bytes; /* Accurate ECN byte counters */
- __u32 tcpi_delivered_e0_bytes;
- __u32 tcpi_delivered_ce_bytes;
- __u32 tcpi_received_e1_bytes;
- __u32 tcpi_received_e0_bytes;
- __u32 tcpi_received_ce_bytes;
This will break uAPI: new fields must be added at the end, or must fill existing holes. Also, a u32 set in stone in uAPI for a byte counter looks way too small.
@@ -5100,7 +5113,7 @@ static void __init tcp_struct_check(void) /* 32bit arches with 8byte alignment on u64 fields might need padding * before tcp_clock_cache. */
- CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 109 + 7);
- CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock,
- tcp_sock_write_txrx, 122 + 6);
The above means an additional cacheline in fast-path WRT the current status. IMHO should be avoided.
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 5bd7fc9bcf66..41e45b9aff3f 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -70,6 +70,7 @@ #include <linux/sysctl.h> #include <linux/kernel.h> #include <linux/prefetch.h> +#include <linux/bitops.h> #include <net/dst.h> #include <net/tcp.h> #include <net/proto_memory.h> @@ -499,6 +500,144 @@ static bool tcp_ecn_rcv_ecn_echo(const struct tcp_sock *tp, const struct tcphdr return false; }
+/* Maps IP ECN field ECT/CE code point to AccECN option field +number, given
- we are sending fields with Accurate ECN Order 1: ECT(1), CE, ECT(0).
- */
+static u8 tcp_ecnfield_to_accecn_optfield(u8 ecnfield) {
- switch (ecnfield) {
- case INET_ECN_NOT_ECT:
return 0; /* AccECN does not send counts of NOT_ECT */
- case INET_ECN_ECT_1:
return 1;
- case INET_ECN_CE:
return 2;
- case INET_ECN_ECT_0:
return 3;
- default:
WARN_ONCE(1, "bad ECN code point: %d\n", ecnfield);
No WARN_ONCE() above please: either the 'ecnfield' data is masked vs INET_ECN_MASK and the WARN_ONCE should not be possible or a remote sender can deterministically trigger a WARN() which nowadays will in turn raise a CVE...
[...]
+static u32 tcp_accecn_field_init_offset(u8 ecnfield) {
- switch (ecnfield) {
- case INET_ECN_NOT_ECT:
return 0; /* AccECN does not send counts of NOT_ECT */
- case INET_ECN_ECT_1:
return TCP_ACCECN_E1B_INIT_OFFSET;
- case INET_ECN_CE:
return TCP_ACCECN_CEB_INIT_OFFSET;
- case INET_ECN_ECT_0:
return TCP_ACCECN_E0B_INIT_OFFSET;
- default:
WARN_ONCE(1, "bad ECN code point: %d\n", ecnfield);
Same as above.
- }
- return 0;
+}
+/* Maps AccECN option field #nr to IP ECN field ECT/CE bits */ +static unsigned int tcp_accecn_optfield_to_ecnfield(unsigned int optfield,
bool order) {
- u8 tmp;
- optfield = order ? 2 - optfield : optfield;
- tmp = optfield + 2;
- return (tmp + (tmp >> 2)) & INET_ECN_MASK; }
+/* Handles AccECN option ECT and CE 24-bit byte counters update +into
- the u32 value in tcp_sock. As we're processing TCP options, it
+is
- safe to access from - 1.
- */
+static s32 tcp_update_ecn_bytes(u32 *cnt, const char *from, u32 +init_offset) {
- u32 truncated = (get_unaligned_be32(from - 1) - init_offset) &
0xFFFFFFU;
- u32 delta = (truncated - *cnt) & 0xFFFFFFU;
- /* If delta has the highest bit set (24th bit) indicating
- negative, sign extend to correct an estimation using
- sign_extend32(delta, 24 - 1)
- */
- delta = sign_extend32(delta, 23);
- *cnt += delta;
- return (s32)delta;
+}
+/* Returns true if the byte counters can be used */ static bool +tcp_accecn_process_option(struct tcp_sock *tp,
const struct sk_buff *skb,
u32 delivered_bytes, int flag) {
- u8 estimate_ecnfield = tp->est_ecnfield;
- bool ambiguous_ecn_bytes_incr = false;
- bool first_changed = false;
- unsigned int optlen;
- unsigned char *ptr;
u8 would be a more appropriate type for binary data.
Hi Ilpo,
Not sure I understand your point; could you elaborate on which binary data you think should use u8?
The header/option is binary data so u8 seems the right type for it. So:
u8 *ptr;
-- i.
On Tue, 6 May 2025, Ilpo Järvinen wrote:
On Tue, 29 Apr 2025, Paolo Abeni wrote:
On 4/22/25 5:35 PM, chia-yu.chang@nokia-bell-labs.com wrote:
@@ -1117,6 +1235,17 @@ static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb opts->num_sack_blocks = 0; }
- if (tcp_ecn_mode_accecn(tp) &&
sock_net(sk)->ipv4.sysctl_tcp_ecn_option) {
int saving = opts->num_sack_blocks > 0 ? 2 : 0;
int remaining = MAX_TCP_OPTION_SPACE - size;
AFAICS the above means tcp_options_fit_accecn() must clear any already-set options, but apparently it does not do so. Have you tested with something adding largish options like mptcp?
This "fitting" for AccEcn option is not to make room for the option but to check if AccECN option fits and in what length, and how it can take advantage of some nop bytes when available to save option space.
A minor correction. SACK blocks will naturally fill the entire option space if there are enough holes, which would "starve" AccECN of option space during loss recovery. Thus, the AccECN option is allowed to grab some of that space from SACK. There's redundancy in SACK blocks anyway, so it shouldn't usually impact the SACK signal much.
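For concreteness, with the 40-byte option space and timestamps in use (12 bytes with alignment): 28 bytes remain, and three SACK blocks need 2 + 3 * 8 = 26 bytes (padded to 28), which would leave nothing for AccECN during loss recovery. Giving one SACK block back frees 8 bytes, exactly enough for a two-field AccECN option (2 + 2 * 3 = 8), while the remaining two blocks still cover the most recent holes.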
From: Ilpo Järvinen ij@kernel.org
Instead of sending the option in every ACK, limit sending to those ACKs where the option is necessary:
- Handshake
- "Change-triggered ACK" + the ACK following it. The 2nd ACK is necessary to unambiguously indicate which of the ECN byte counters is increasing. The first ACK has two counters increasing due to the ecnfield edge.
- ACKs with CE to allow CEP delta validations to take advantage of the option.
- Force the option to be sent at least once per 2^22 bytes. The check is done using the bit edges of the byte counters (avoids the need for extra variables).
- AccECN option beacon to send a few times per RTT even if nothing in the ECN state requires that. The default is 3 times per RTT, and its period can be set via sysctl_tcp_ecn_option_beacon.
Signed-off-by: Ilpo Järvinen ij@kernel.org Co-developed-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com Signed-off-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com --- include/linux/tcp.h | 3 +++ include/net/netns/ipv4.h | 1 + include/net/tcp.h | 1 + net/ipv4/sysctl_net_ipv4.c | 9 ++++++++ net/ipv4/tcp.c | 5 ++++- net/ipv4/tcp_input.c | 36 +++++++++++++++++++++++++++++++- net/ipv4/tcp_ipv4.c | 1 + net/ipv4/tcp_minisocks.c | 2 ++ net/ipv4/tcp_output.c | 42 ++++++++++++++++++++++++++++++-------- 9 files changed, 90 insertions(+), 10 deletions(-)
diff --git a/include/linux/tcp.h b/include/linux/tcp.h index 0e032d9631ac..acb0727855f8 100644 --- a/include/linux/tcp.h +++ b/include/linux/tcp.h @@ -309,8 +309,11 @@ struct tcp_sock { u8 received_ce_pending:4, /* Not yet transmit cnt of received_ce */ unused2:4; u8 accecn_minlen:2,/* Minimum length of AccECN option sent */ + prev_ecnfield:2,/* ECN bits from the previous segment */ + accecn_opt_demand:2,/* Demand AccECN option for n next ACKs */ est_ecnfield:2;/* ECN field for AccECN delivered estimates */ u32 app_limited; /* limited until "delivered" reaches this val */ + u64 accecn_opt_tstamp; /* Last AccECN option sent timestamp */ u32 rcv_wnd; /* Current receiver window */ /* * Options received (usually on last packet, some only on SYN packets). diff --git a/include/net/netns/ipv4.h b/include/net/netns/ipv4.h index 4569a9ef4fb8..ff8b5b56ad00 100644 --- a/include/net/netns/ipv4.h +++ b/include/net/netns/ipv4.h @@ -149,6 +149,7 @@ struct netns_ipv4 {
u8 sysctl_tcp_ecn; u8 sysctl_tcp_ecn_option; + u8 sysctl_tcp_ecn_option_beacon; u8 sysctl_tcp_ecn_fallback;
u8 sysctl_ip_default_ttl; diff --git a/include/net/tcp.h b/include/net/tcp.h index bfff2a9f95bf..3ee5b52441e3 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1068,6 +1068,7 @@ static inline void tcp_accecn_init_counters(struct tcp_sock *tp) __tcp_accecn_init_bytes_counters(tp->received_ecn_bytes); __tcp_accecn_init_bytes_counters(tp->delivered_ecn_bytes); tp->accecn_minlen = 0; + tp->accecn_opt_demand = 0; tp->est_ecnfield = 0; }
diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c index 1d7fd86ca7b9..3ceefd2a77d7 100644 --- a/net/ipv4/sysctl_net_ipv4.c +++ b/net/ipv4/sysctl_net_ipv4.c @@ -740,6 +740,15 @@ static struct ctl_table ipv4_net_table[] = { .extra1 = SYSCTL_ZERO, .extra2 = SYSCTL_TWO, }, + { + .procname = "tcp_ecn_option_beacon", + .data = &init_net.ipv4.sysctl_tcp_ecn_option_beacon, + .maxlen = sizeof(u8), + .mode = 0644, + .proc_handler = proc_dou8vec_minmax, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_FOUR, + }, { .procname = "tcp_ecn_fallback", .data = &init_net.ipv4.sysctl_tcp_ecn_fallback, diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 89799f73c451..a712643a934e 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -3368,6 +3368,8 @@ int tcp_disconnect(struct sock *sk, int flags) tp->wait_third_ack = 0; tp->accecn_fail_mode = 0; tcp_accecn_init_counters(tp); + tp->prev_ecnfield = 0; + tp->accecn_opt_tstamp = 0; if (icsk->icsk_ca_initialized && icsk->icsk_ca_ops->release) icsk->icsk_ca_ops->release(sk); memset(icsk->icsk_ca_priv, 0, sizeof(icsk->icsk_ca_priv)); @@ -5107,13 +5109,14 @@ static void __init tcp_struct_check(void) CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, received_ce); CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, received_ecn_bytes); CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, app_limited); + CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, accecn_opt_tstamp); CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, rcv_wnd); CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, rx_opt);
/* 32bit arches with 8byte alignment on u64 fields might need padding * before tcp_clock_cache. */ - CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 122 + 6); + CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 130 + 6);
/* RX read-write hotpath cache lines */ CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_rx, bytes_received); diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 41e45b9aff3f..1e8e49881ca4 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -466,6 +466,7 @@ static void tcp_ecn_rcv_synack(struct sock *sk, const struct tcphdr *th, default: tcp_ecn_mode_set(tp, TCP_ECN_MODE_ACCECN); tp->syn_ect_rcv = ip_dsfield & INET_ECN_MASK; + tp->accecn_opt_demand = 2; if (INET_ECN_is_ce(ip_dsfield) && tcp_accecn_validate_syn_feedback(sk, ace, tp->syn_ect_snt)) { @@ -486,6 +487,7 @@ static void tcp_ecn_rcv_syn(struct tcp_sock *tp, const struct tcphdr *th, } else { tp->syn_ect_rcv = TCP_SKB_CB(skb)->ip_dsfield & INET_ECN_MASK; + tp->prev_ecnfield = tp->syn_ect_rcv; tcp_ecn_mode_set(tp, TCP_ECN_MODE_ACCECN); } } @@ -6278,6 +6280,7 @@ void tcp_ecn_received_counters(struct sock *sk, const struct sk_buff *skb, u8 ecnfield = TCP_SKB_CB(skb)->ip_dsfield & INET_ECN_MASK; u8 is_ce = INET_ECN_is_ce(ecnfield); struct tcp_sock *tp = tcp_sk(sk); + bool ecn_edge;
if (!INET_ECN_is_not_ect(ecnfield)) { u32 pcount = is_ce * max_t(u16, 1, skb_shinfo(skb)->gso_segs); @@ -6291,9 +6294,36 @@ void tcp_ecn_received_counters(struct sock *sk, const struct sk_buff *skb,
if (payload_len > 0) { u8 minlen = tcp_ecnfield_to_accecn_optfield(ecnfield); + u32 oldbytes = tp->received_ecn_bytes[ecnfield - 1]; + tp->received_ecn_bytes[ecnfield - 1] += payload_len; tp->accecn_minlen = max_t(u8, tp->accecn_minlen, minlen); + + /* Demand AccECN option at least every 2^22 bytes to + * avoid overflowing the ECN byte counters. + */ + if ((tp->received_ecn_bytes[ecnfield - 1] ^ oldbytes) & + ~((1 << 22) - 1)) { + u8 opt_demand = max_t(u8, 1, + tp->accecn_opt_demand); + + tp->accecn_opt_demand = opt_demand; + } + } + } + + ecn_edge = tp->prev_ecnfield != ecnfield; + if (ecn_edge || is_ce) { + tp->prev_ecnfield = ecnfield; + /* Demand Accurate ECN change-triggered ACKs. Two ACK are + * demanded to indicate unambiguously the ecnfield value + * in the latter ACK. + */ + if (tcp_ecn_mode_accecn(tp)) { + if (ecn_edge) + inet_csk(sk)->icsk_ack.pending |= ICSK_ACK_NOW; + tp->accecn_opt_demand = 2; } } } @@ -6426,8 +6456,12 @@ static bool tcp_validate_incoming(struct sock *sk, struct sk_buff *skb, * RFC 5961 4.2 : Send a challenge ack */ if (th->syn) { - if (tcp_ecn_mode_accecn(tp)) + if (tcp_ecn_mode_accecn(tp)) { + u8 opt_demand = max_t(u8, 1, tp->accecn_opt_demand); + send_accecn_reflector = true; + tp->accecn_opt_demand = opt_demand; + } if (sk->sk_state == TCP_SYN_RECV && sk->sk_socket && th->ack && TCP_SKB_CB(skb)->seq + 1 == TCP_SKB_CB(skb)->end_seq && TCP_SKB_CB(skb)->seq + 1 == tp->rcv_nxt && diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 3f3e285fc973..2e95dad66fe3 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -3451,6 +3451,7 @@ static int __net_init tcp_sk_init(struct net *net) { net->ipv4.sysctl_tcp_ecn = 2; net->ipv4.sysctl_tcp_ecn_option = 2; + net->ipv4.sysctl_tcp_ecn_option_beacon = 3; net->ipv4.sysctl_tcp_ecn_fallback = 1;
 	net->ipv4.sysctl_tcp_base_mss = TCP_BASE_MSS;
diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index 3f8225bae49f..e0f2bd2cee9e 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -501,6 +501,8 @@ static void tcp_ecn_openreq_child(struct sock *sk,
 		tcp_ecn_mode_set(tp, TCP_ECN_MODE_ACCECN);
 		tp->syn_ect_snt = treq->syn_ect_snt;
 		tcp_accecn_third_ack(sk, skb, treq->syn_ect_snt);
+		tp->prev_ecnfield = treq->syn_ect_rcv;
+		tp->accecn_opt_demand = 1;
 		tcp_ecn_received_counters(sk, skb, skb->len - th->doff * 4);
 	} else {
 		tcp_ecn_mode_set(tp, inet_rsk(req)->ecn_ok ?
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index a36de6c539da..a76061dc4e5f 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -806,8 +806,12 @@ static void tcp_options_write(struct tcphdr *th, struct tcp_sock *tp,
 			*ptr++ = htonl(((e0b & 0xffffff) << 8) | TCPOPT_NOP);
 		}
-		if (tp)
+		if (tp) {
 			tp->accecn_minlen = 0;
+			tp->accecn_opt_tstamp = tp->tcp_mstamp;
+			if (tp->accecn_opt_demand)
+				tp->accecn_opt_demand--;
+		}
 	}

 	if (unlikely(OPTION_SACK_ADVERTISE & options)) {
@@ -984,6 +988,18 @@ static int tcp_options_fit_accecn(struct tcp_out_options *opts, int required,
 	return size;
 }

+static bool tcp_accecn_option_beacon_check(const struct sock *sk)
+{
+	const struct tcp_sock *tp = tcp_sk(sk);
+
+	if (!sock_net(sk)->ipv4.sysctl_tcp_ecn_option_beacon)
+		return false;
+
+	return tcp_stamp_us_delta(tp->tcp_mstamp, tp->accecn_opt_tstamp) *
+	       sock_net(sk)->ipv4.sysctl_tcp_ecn_option_beacon >=
+	       (tp->srtt_us >> 3);
+}
+
 /* Compute TCP options for SYN packets. This is not the final
  * network wire format yet.
  */
@@ -1237,13 +1253,18 @@ static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb
 	if (tcp_ecn_mode_accecn(tp) &&
 	    sock_net(sk)->ipv4.sysctl_tcp_ecn_option) {
-		int saving = opts->num_sack_blocks > 0 ? 2 : 0;
-		int remaining = MAX_TCP_OPTION_SPACE - size;
-
-		opts->ecn_bytes = tp->received_ecn_bytes;
-		size += tcp_options_fit_accecn(opts, tp->accecn_minlen,
-					       remaining,
-					       saving);
+		if (sock_net(sk)->ipv4.sysctl_tcp_ecn_option >= 2 ||
+		    tp->accecn_opt_demand ||
+		    tcp_accecn_option_beacon_check(sk)) {
+			int saving = opts->num_sack_blocks > 0 ? 2 : 0;
+			int remaining = MAX_TCP_OPTION_SPACE - size;
+
+			opts->ecn_bytes = tp->received_ecn_bytes;
+			size += tcp_options_fit_accecn(opts,
+						       tp->accecn_minlen,
+						       remaining,
+						       saving);
+		}
 	}

 	if (unlikely(BPF_SOCK_OPS_TEST_FLAG(tp,
@@ -2959,6 +2980,11 @@ static bool tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle,
 	sent_pkts = 0;

 	tcp_mstamp_refresh(tp);
+
+	/* AccECN option beacon depends on mstamp, it may change mss */
+	if (tcp_ecn_mode_accecn(tp) && tcp_accecn_option_beacon_check(sk))
+		mss_now = tcp_current_mss(sk);
+
 	if (!push_one) {
 		/* Do MTU probing. */
 		result = tcp_mtu_probe(sk);
On 4/22/25 5:35 PM, chia-yu.chang@nokia-bell-labs.com wrote:
From: Ilpo Järvinen ij@kernel.org
Instead of sending the option in every ACK, limit sending to those ACKs where the option is necessary:
- Handshake
- "Change-triggered ACK" + the ACK following it. The 2nd ACK is necessary to unambiguously indicate which of the ECN byte counters in increasing. The first ACK has two counters increasing due to the ecnfield edge.
- ACKs with CE to allow CEP delta validations to take advantage of the option.
- Force the option to be sent at least once per 2^22 bytes. The check is done using the bit edges of the byte counters (avoids the need for extra variables).
- AccECN option beacon to send a few times per RTT even if nothing in the ECN state requires that. The default is 3 times per RTT, and its period can be set via sysctl_tcp_ecn_option_beacon.
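For illustration, the beacon condition is plain pacing arithmetic: the option is due once the time since the last sent option, multiplied by the beacon count, reaches the smoothed RTT (the kernel right-shifts tp->srtt_us by 3 because it stores 8 times the srtt). A minimal standalone sketch with made-up names and userspace types, not the kernel code:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: true when an AccECN option is due so that
 * roughly 'beacon' options get sent per smoothed RTT.
 */
static bool accecn_beacon_due(uint64_t elapsed_us, uint32_t srtt_us,
			      unsigned int beacon)
{
	if (!beacon)		/* beacon == 0 disables periodic sending */
		return false;
	/* elapsed * beacon >= srtt  <=>  elapsed >= srtt / beacon */
	return elapsed_us * beacon >= srtt_us;
}

With the default of 3 and a 30 ms smoothed RTT, this demands the option once roughly 10 ms have passed since the last one.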
Signed-off-by: Ilpo Järvinen ij@kernel.org Co-developed-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com Signed-off-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com
include/linux/tcp.h | 3 +++ include/net/netns/ipv4.h | 1 + include/net/tcp.h | 1 + net/ipv4/sysctl_net_ipv4.c | 9 ++++++++ net/ipv4/tcp.c | 5 ++++- net/ipv4/tcp_input.c | 36 +++++++++++++++++++++++++++++++- net/ipv4/tcp_ipv4.c | 1 + net/ipv4/tcp_minisocks.c | 2 ++ net/ipv4/tcp_output.c | 42 ++++++++++++++++++++++++++++++-------- 9 files changed, 90 insertions(+), 10 deletions(-)
diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index 0e032d9631ac..acb0727855f8 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -309,8 +309,11 @@ struct tcp_sock {
 	u8	received_ce_pending:4, /* Not yet transmit cnt of received_ce */
 		unused2:4;
 	u8	accecn_minlen:2,/* Minimum length of AccECN option sent */
+		prev_ecnfield:2,/* ECN bits from the previous segment */
+		accecn_opt_demand:2,/* Demand AccECN option for n next ACKs */
 		est_ecnfield:2;/* ECN field for AccECN delivered estimates */
 	u32	app_limited;	/* limited until "delivered" reaches this val */
+	u64	accecn_opt_tstamp;	/* Last AccECN option sent timestamp */
AFAICS this field is only accessed in the tx path, while this chunk belongs to the tcp_sock_write_txrx group.
@@ -740,6 +740,15 @@ static struct ctl_table ipv4_net_table[] = {
 		.extra1 = SYSCTL_ZERO,
 		.extra2 = SYSCTL_TWO,
 	},
+	{
+		.procname = "tcp_ecn_option_beacon",
+		.data = &init_net.ipv4.sysctl_tcp_ecn_option_beacon,
+		.maxlen = sizeof(u8),
+		.mode = 0644,
+		.proc_handler = proc_dou8vec_minmax,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_FOUR,
+	},
The number of new sysctls is concerningly high, and I don't see any documentation update yet.
@@ -6291,9 +6294,36 @@ void tcp_ecn_received_counters(struct sock *sk, const struct sk_buff *skb,
 		if (payload_len > 0) {
 			u8 minlen = tcp_ecnfield_to_accecn_optfield(ecnfield);
+			u32 oldbytes = tp->received_ecn_bytes[ecnfield - 1];
+
 			tp->received_ecn_bytes[ecnfield - 1] += payload_len;
 			tp->accecn_minlen = max_t(u8, tp->accecn_minlen, minlen);
+
+			/* Demand AccECN option at least every 2^22 bytes to
+			 * avoid overflowing the ECN byte counters.
+			 */
+			if ((tp->received_ecn_bytes[ecnfield - 1] ^ oldbytes) &
+			    ~((1 << 22) - 1)) {
+				u8 opt_demand = max_t(u8, 1,
+						      tp->accecn_opt_demand);
+
+				tp->accecn_opt_demand = opt_demand;
+			}
I guess this explains the u32 values for such counters. Some comments in the previous patch could be useful.
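For what it's worth, the bit-edge check is easy to sanity-check in isolation: adding bytes crosses a multiple of 2^22 exactly when some bit at position 22 or above changes, so XORing the old and new counter values and masking with ~((1 << 22) - 1) flags the crossing without a separate bytes-since-last-option variable. A standalone sketch (illustrative only, not the kernel code):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* True when the counter crossed a multiple of 2^22 (4 MiB),
 * i.e. a bit at position >= 22 changed.
 */
static bool crossed_4mib(uint32_t oldbytes, uint32_t newbytes)
{
	return ((oldbytes ^ newbytes) & ~((1u << 22) - 1)) != 0;
}

int main(void)
{
	assert(!crossed_4mib(0x3ffffe, 0x3fffff));
	assert(crossed_4mib(0x3fffff, 0x400001));
	return 0;
}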
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 3f3e285fc973..2e95dad66fe3 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -3451,6 +3451,7 @@ static int __net_init tcp_sk_init(struct net *net)
 {
 	net->ipv4.sysctl_tcp_ecn = 2;
 	net->ipv4.sysctl_tcp_ecn_option = 2;
+	net->ipv4.sysctl_tcp_ecn_option_beacon = 3;
 	net->ipv4.sysctl_tcp_ecn_fallback = 1;
Human-readable macros instead of magic numbers could help.
@@ -1237,13 +1253,18 @@ static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb
 	if (tcp_ecn_mode_accecn(tp) &&
 	    sock_net(sk)->ipv4.sysctl_tcp_ecn_option) {
-		int saving = opts->num_sack_blocks > 0 ? 2 : 0;
-		int remaining = MAX_TCP_OPTION_SPACE - size;
-
-		opts->ecn_bytes = tp->received_ecn_bytes;
-		size += tcp_options_fit_accecn(opts, tp->accecn_minlen,
-					       remaining,
-					       saving);
+		if (sock_net(sk)->ipv4.sysctl_tcp_ecn_option >= 2 ||
+		    tp->accecn_opt_demand ||
+		    tcp_accecn_option_beacon_check(sk)) {
Why a nested if here instead of just expanding the existing one?
/P
-----Original Message----- From: Paolo Abeni pabeni@redhat.com Sent: Tuesday, April 29, 2025 2:10 PM To: Chia-Yu Chang (Nokia) chia-yu.chang@nokia-bell-labs.com; horms@kernel.org; dsahern@kernel.org; kuniyu@amazon.com; bpf@vger.kernel.org; netdev@vger.kernel.org; dave.taht@gmail.com; jhs@mojatatu.com; kuba@kernel.org; stephen@networkplumber.org; xiyou.wangcong@gmail.com; jiri@resnulli.us; davem@davemloft.net; edumazet@google.com; andrew+netdev@lunn.ch; donald.hunter@gmail.com; ast@fiberby.net; liuhangbin@gmail.com; shuah@kernel.org; linux-kselftest@vger.kernel.org; ij@kernel.org; ncardwell@google.com; Koen De Schepper (Nokia) koen.de_schepper@nokia-bell-labs.com; g.white g.white@cablelabs.com; ingemar.s.johansson@ericsson.com; mirja.kuehlewind@ericsson.com; cheshire@apple.com; rs.ietf@gmx.at; Jason_Livingood@comcast.com; vidhi_goel vidhi_goel@apple.com Subject: Re: [PATCH v5 net-next 10/15] tcp: accecn: AccECN option send control
The number of new sysctls is concerningly high, and I don't see any documentation update yet.
Hi Paolo,
The documentation is expected to come at the end of the whole AccECN patch series: https://github.com/L4STeam/linux-net-next/commit/03dcec1aec6aa774da4c1993b38...
Or I can move it next to this patch.
I guess this explains the u32 values for such counters. Some comments in the previous patch could be useful.
Yes, as my previous email says, I will refer to the algorithm in the AccECN draft.
Human-readable macros instead of magic numbers could help.
OK, comments will be added here.
Why a nested if here instead of just expanding the existing one?
Sure, will merge them.
Chia-Yu
On Mon, 5 May 2025, Chia-Yu Chang (Nokia) wrote:
Human-readable macros instead of magic numbers could help.
OK, comments will be added here.
Hi,
Using named defines to replace literals would be more useful than comments (names can be grepped for, do not fall out-of-sync with code, etc.).
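To make that concrete, the defaults could read something like this (hypothetical names, not in the series; the value meanings follow how the series uses them):

/* Hypothetical defines, for illustration only */
#define TCP_ECN_OPTION_DISABLED		0	/* never send the AccECN option */
#define TCP_ECN_OPTION_DEMANDED		1	/* only on demand or beacon */
#define TCP_ECN_OPTION_FULL		2	/* on every ACK with room */
#define TCP_ECN_OPTION_BEACON_DEFAULT	3	/* ~3 options per RTT */

	net->ipv4.sysctl_tcp_ecn_option = TCP_ECN_OPTION_FULL;
	net->ipv4.sysctl_tcp_ecn_option_beacon = TCP_ECN_OPTION_BEACON_DEFAULT;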
Why a nested if here instead of just expanding the existing one?
Sure, will merge them.
While I don't remember everything that well anymore, this might have been to reduce code churn in some later patch, so it might be worth checking that first (that patch might even fall outside of this series now that these are split into multiple chunks).
-----Original Message----- From: Ilpo Järvinen ij@kernel.org Sent: Tuesday, May 6, 2025 1:27 AM To: Chia-Yu Chang (Nokia) chia-yu.chang@nokia-bell-labs.com Cc: Paolo Abeni pabeni@redhat.com; horms@kernel.org; dsahern@kernel.org; kuniyu@amazon.com; bpf@vger.kernel.org; netdev@vger.kernel.org; dave.taht@gmail.com; jhs@mojatatu.com; kuba@kernel.org; stephen@networkplumber.org; xiyou.wangcong@gmail.com; jiri@resnulli.us; davem@davemloft.net; edumazet@google.com; andrew+netdev@lunn.ch; donald.hunter@gmail.com; ast@fiberby.net; liuhangbin@gmail.com; shuah@kernel.org; linux-kselftest@vger.kernel.org; ncardwell@google.com; Koen De Schepper (Nokia) koen.de_schepper@nokia-bell-labs.com; g.white g.white@cablelabs.com; ingemar.s.johansson@ericsson.com; mirja.kuehlewind@ericsson.com; cheshire@apple.com; rs.ietf@gmx.at; Jason_Livingood@comcast.com; vidhi_goel vidhi_goel@apple.com Subject: RE: [PATCH v5 net-next 10/15] tcp: accecn: AccECN option send control
While I don't remember everything that well anymore, this might have been to reduce code churn in some later patch, so it might be worth checking that first (that patch might even fall outside of this series now that these are split into multiple chunks).
-- i.
Hi Ilpo,
Thanks for raising this point. I've checked that the condition is changed in a later patch ("tcp: accecn: AccECN option failure handling"), but with a similar nested if. So I will try to merge them in the next version and will verify with packetdrill for sure.
Chia-Yu
From: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com
The AccECN option may fail in various ways; handle these cases:
- Remove the option from SYN/ACK rexmits to handle blackholes
- If no option arrives in the SYN/ACK, assume the option is not usable
- If an option arrives later, re-enable it
- If the option is zeroed, disable AccECN option processing
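A rough sketch of the receive-side classification these rules imply (mirroring the tcp_accecn_option_init() added below, with userspace types; the field arguments stand for the option's 24-bit byte counters):

#include <stdint.h>

enum saw_accecn_opt {			/* tp->saw_accecn_opt states */
	ACCECN_OPT_NOT_SEEN	= 0,
	ACCECN_OPT_EMPTY_SEEN	= 1,	/* option seen, no counter fields */
	ACCECN_OPT_COUNTER_SEEN	= 2,	/* usable counter fields seen */
	ACCECN_OPT_FAIL_SEEN	= 3,	/* zeroed by a middlebox: give up */
};

static enum saw_accecn_opt classify_opt(unsigned int optlen,
					uint32_t first_field,
					uint32_t third_field)
{
	if (optlen < 3)		/* shorter than one 24-bit field */
		return ACCECN_OPT_EMPTY_SEEN;
	if (first_field == 0)	/* counters start non-zero per the draft */
		return ACCECN_OPT_FAIL_SEEN;
	if (optlen < 3 * 3)	/* fewer than three fields present */
		return ACCECN_OPT_COUNTER_SEEN;
	if (third_field == 0)
		return ACCECN_OPT_FAIL_SEEN;
	return ACCECN_OPT_COUNTER_SEEN;
}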
Signed-off-by: Ilpo Järvinen ij@kernel.org Signed-off-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com --- include/linux/tcp.h | 6 ++-- include/net/tcp.h | 7 +++++ net/ipv4/tcp.c | 1 + net/ipv4/tcp_input.c | 67 +++++++++++++++++++++++++++++++++++----- net/ipv4/tcp_minisocks.c | 38 +++++++++++++++++++++++ net/ipv4/tcp_output.c | 7 +++-- 6 files changed, 115 insertions(+), 11 deletions(-)
diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index acb0727855f8..b93bf1785008 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -160,7 +160,8 @@ struct tcp_request_sock {
 	u8				accecn_ok  : 1,
 					syn_ect_snt: 2,
 					syn_ect_rcv: 2;
-	u8				accecn_fail_mode:4;
+	u8				accecn_fail_mode:4,
+					saw_accecn_opt :2;
 	u32				txhash;
 	u32				rcv_isn;
 	u32				snt_isn;
@@ -391,7 +392,8 @@ struct tcp_sock {
 		syn_ect_snt:2,	/* AccECN ECT memory, only */
 		syn_ect_rcv:2,	/* ... needed durign 3WHS + first seqno */
 		wait_third_ack:1; /* Wait 3rd ACK in simultaneous open */
-	u8	accecn_fail_mode:4; /* AccECN failure handling */
+	u8	accecn_fail_mode:4, /* AccECN failure handling */
+		saw_accecn_opt:2; /* An AccECN option was seen */
 	u8	thin_lto    : 1,/* Use linear timeouts for thin streams */
 		fastopen_connect:1, /* FASTOPEN_CONNECT sockopt */
 		fastopen_no_cookie:1, /* Allow send/recv SYN+data without a cookie */
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 3ee5b52441e3..0ade2873b84e 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -276,6 +276,12 @@ static inline void tcp_accecn_fail_mode_set(struct tcp_sock *tp, u8 mode)
 	tp->accecn_fail_mode |= mode;
 }

+/* tp->saw_accecn_opt states */
+#define TCP_ACCECN_OPT_NOT_SEEN		0x0
+#define TCP_ACCECN_OPT_EMPTY_SEEN	0x1
+#define TCP_ACCECN_OPT_COUNTER_SEEN	0x2
+#define TCP_ACCECN_OPT_FAIL_SEEN	0x3
+
 /* Flags in tp->nonagle */
 #define TCP_NAGLE_OFF		1	/* Nagle's algo is disabled */
 #define TCP_NAGLE_CORK		2	/* Socket is corked */
@@ -477,6 +483,7 @@ static inline int tcp_accecn_extract_syn_ect(u8 ace)
 bool tcp_accecn_validate_syn_feedback(struct sock *sk, u8 ace, u8 sent_ect);
 void tcp_accecn_third_ack(struct sock *sk, const struct sk_buff *skb,
 			  u8 syn_ect_snt);
+u8 tcp_accecn_option_init(const struct sk_buff *skb, u8 opt_offset);
 void tcp_ecn_received_counters(struct sock *sk, const struct sk_buff *skb,
 			       u32 payload_len);
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index a712643a934e..03c205eaabe5 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -3367,6 +3367,7 @@ int tcp_disconnect(struct sock *sk, int flags)
 	tp->delivered_ce = 0;
 	tp->wait_third_ack = 0;
 	tp->accecn_fail_mode = 0;
+	tp->saw_accecn_opt = TCP_ACCECN_OPT_NOT_SEEN;
 	tcp_accecn_init_counters(tp);
 	tp->prev_ecnfield = 0;
 	tp->accecn_opt_tstamp = 0;
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 1e8e49881ca4..8f1e10530880 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -446,8 +446,8 @@ bool tcp_accecn_validate_syn_feedback(struct sock *sk, u8 ace, u8 sent_ect)
 }

 /* See Table 2 of the AccECN draft */
-static void tcp_ecn_rcv_synack(struct sock *sk, const struct tcphdr *th,
-			       u8 ip_dsfield)
+static void tcp_ecn_rcv_synack(struct sock *sk, const struct sk_buff *skb,
+			       const struct tcphdr *th, u8 ip_dsfield)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	u8 ace = tcp_accecn_ace(th);
@@ -466,7 +466,19 @@ static void tcp_ecn_rcv_synack(struct sock *sk, const struct tcphdr *th,
 	default:
 		tcp_ecn_mode_set(tp, TCP_ECN_MODE_ACCECN);
 		tp->syn_ect_rcv = ip_dsfield & INET_ECN_MASK;
-		tp->accecn_opt_demand = 2;
+		if (tp->rx_opt.accecn &&
+		    tp->saw_accecn_opt < TCP_ACCECN_OPT_COUNTER_SEEN) {
+			u8 saw_opt = tcp_accecn_option_init(skb,
+							    tp->rx_opt.accecn);
+
+			tp->saw_accecn_opt = saw_opt;
+			if (tp->saw_accecn_opt == TCP_ACCECN_OPT_FAIL_SEEN) {
+				u8 fail_mode = TCP_ACCECN_OPT_FAIL_RECV;
+
+				tcp_accecn_fail_mode_set(tp, fail_mode);
+			}
+			tp->accecn_opt_demand = 2;
+		}
 		if (INET_ECN_is_ce(ip_dsfield) &&
 		    tcp_accecn_validate_syn_feedback(sk, ace,
 						     tp->syn_ect_snt)) {
@@ -586,7 +598,23 @@ static bool tcp_accecn_process_option(struct tcp_sock *tp,
 	bool order1, res;
 	unsigned int i;

+	if (tcp_accecn_opt_fail_recv(tp))
+		return false;
+
 	if (!(flag & FLAG_SLOWPATH) || !tp->rx_opt.accecn) {
+		if (!tp->saw_accecn_opt) {
+			/* Too late to enable after this point due to
+			 * potential counter wraps
+			 */
+			if (tp->bytes_sent >= (1 << 23) - 1) {
+				u8 fail_mode = TCP_ACCECN_OPT_FAIL_RECV;
+
+				tp->saw_accecn_opt = TCP_ACCECN_OPT_FAIL_SEEN;
+				tcp_accecn_fail_mode_set(tp, fail_mode);
+			}
+			return false;
+		}
+
 		if (estimate_ecnfield) {
 			u8 ecnfield = estimate_ecnfield - 1;
@@ -602,6 +630,13 @@ static bool tcp_accecn_process_option(struct tcp_sock *tp,
 	order1 = (ptr[0] == TCPOPT_ACCECN1);
 	ptr += 2;

+	if (tp->saw_accecn_opt < TCP_ACCECN_OPT_COUNTER_SEEN) {
+		tp->saw_accecn_opt = tcp_accecn_option_init(skb,
+							    tp->rx_opt.accecn);
+		if (tp->saw_accecn_opt == TCP_ACCECN_OPT_FAIL_SEEN)
+			tcp_accecn_fail_mode_set(tp, TCP_ACCECN_OPT_FAIL_RECV);
+	}
+
 	res = !!estimate_ecnfield;
 	for (i = 0; i < 3; i++) {
 		if (optlen >= TCPOLEN_ACCECN_PERFIELD) {
@@ -6457,10 +6492,25 @@ static bool tcp_validate_incoming(struct sock *sk, struct sk_buff *skb,
 	 */
 	if (th->syn) {
 		if (tcp_ecn_mode_accecn(tp)) {
-			u8 opt_demand = max_t(u8, 1, tp->accecn_opt_demand);
-
 			send_accecn_reflector = true;
-			tp->accecn_opt_demand = opt_demand;
+			if (tp->rx_opt.accecn &&
+			    tp->saw_accecn_opt < TCP_ACCECN_OPT_COUNTER_SEEN) {
+				u8 offset = tp->rx_opt.accecn;
+				u8 opt_demand;
+				u8 saw_opt;
+
+				saw_opt = tcp_accecn_option_init(skb, offset);
+				tp->saw_accecn_opt = saw_opt;
+				if (tp->saw_accecn_opt ==
+				    TCP_ACCECN_OPT_FAIL_SEEN) {
+					u8 fail_mode = TCP_ACCECN_OPT_FAIL_RECV;
+
+					tcp_accecn_fail_mode_set(tp, fail_mode);
+				}
+				opt_demand = max_t(u8, 1,
+						   tp->accecn_opt_demand);
+				tp->accecn_opt_demand = opt_demand;
+			}
 		}
 		if (sk->sk_state == TCP_SYN_RECV && sk->sk_socket && th->ack &&
 		    TCP_SKB_CB(skb)->seq + 1 == TCP_SKB_CB(skb)->end_seq &&
@@ -6954,7 +7004,8 @@ static int tcp_rcv_synsent_state_process(struct sock *sk, struct sk_buff *skb,
 	 */

 	if (tcp_ecn_mode_any(tp))
-		tcp_ecn_rcv_synack(sk, th, TCP_SKB_CB(skb)->ip_dsfield);
+		tcp_ecn_rcv_synack(sk, skb, th,
+				   TCP_SKB_CB(skb)->ip_dsfield);

 	tcp_init_wl(tp, TCP_SKB_CB(skb)->seq);
 	tcp_try_undo_spurious_syn(sk);
@@ -7531,6 +7582,8 @@ static void tcp_openreq_init(struct request_sock *req,
 	tcp_rsk(req)->snt_tsval_first = 0;
 	tcp_rsk(req)->last_oow_ack_time = 0;
 	tcp_rsk(req)->accecn_ok = 0;
+	tcp_rsk(req)->saw_accecn_opt = TCP_ACCECN_OPT_NOT_SEEN;
+	tcp_rsk(req)->accecn_fail_mode = 0;
 	tcp_rsk(req)->syn_ect_rcv = 0;
 	tcp_rsk(req)->syn_ect_snt = 0;
 	req->mss = rx_opt->mss_clamp;
diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index e0f2bd2cee9e..8bb4953fc8bd 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -501,6 +501,7 @@ static void tcp_ecn_openreq_child(struct sock *sk,
 		tcp_ecn_mode_set(tp, TCP_ECN_MODE_ACCECN);
 		tp->syn_ect_snt = treq->syn_ect_snt;
 		tcp_accecn_third_ack(sk, skb, treq->syn_ect_snt);
+		tp->saw_accecn_opt = treq->saw_accecn_opt;
 		tp->prev_ecnfield = treq->syn_ect_rcv;
 		tp->accecn_opt_demand = 1;
 		tcp_ecn_received_counters(sk, skb, skb->len - th->doff * 4);
@@ -555,6 +556,30 @@ static void smc_check_reset_syn_req(const struct tcp_sock *oldtp,
 #endif
 }

+u8 tcp_accecn_option_init(const struct sk_buff *skb, u8 opt_offset)
+{
+	unsigned char *ptr = skb_transport_header(skb) + opt_offset;
+	unsigned int optlen = ptr[1] - 2;
+
+	WARN_ON_ONCE(ptr[0] != TCPOPT_ACCECN0 && ptr[0] != TCPOPT_ACCECN1);
+	ptr += 2;
+
+	/* Detect option zeroing: an AccECN connection "MAY check that the
+	 * initial value of the EE0B field or the EE1B field is non-zero"
+	 */
+	if (optlen < TCPOLEN_ACCECN_PERFIELD)
+		return TCP_ACCECN_OPT_EMPTY_SEEN;
+	if (get_unaligned_be24(ptr) == 0)
+		return TCP_ACCECN_OPT_FAIL_SEEN;
+	if (optlen < TCPOLEN_ACCECN_PERFIELD * 3)
+		return TCP_ACCECN_OPT_COUNTER_SEEN;
+	ptr += TCPOLEN_ACCECN_PERFIELD * 2;
+	if (get_unaligned_be24(ptr) == 0)
+		return TCP_ACCECN_OPT_FAIL_SEEN;
+
+	return TCP_ACCECN_OPT_COUNTER_SEEN;
+}
+
 /* This is not only more efficient than what we used to do, it eliminates
  * a lot of code duplication between IPv4/IPv6 SYN recv processing. -DaveM
  *
@@ -716,6 +741,7 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
 	bool own_req;

 	tmp_opt.saw_tstamp = 0;
+	tmp_opt.accecn = 0;
 	if (th->doff > (sizeof(struct tcphdr)>>2)) {
 		tcp_parse_options(sock_net(sk), skb, &tmp_opt, 0, NULL);
@@ -893,6 +919,18 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
 	if (!(flg & TCP_FLAG_ACK))
 		return NULL;

+	if (tcp_rsk(req)->accecn_ok && tmp_opt.accecn &&
+	    tcp_rsk(req)->saw_accecn_opt < TCP_ACCECN_OPT_COUNTER_SEEN) {
+		u8 saw_opt = tcp_accecn_option_init(skb, tmp_opt.accecn);
+
+		tcp_rsk(req)->saw_accecn_opt = saw_opt;
+		if (tcp_rsk(req)->saw_accecn_opt == TCP_ACCECN_OPT_FAIL_SEEN) {
+			u8 fail_mode = TCP_ACCECN_OPT_FAIL_RECV;
+
+			tcp_rsk(req)->accecn_fail_mode |= fail_mode;
+		}
+	}
+
 	/* For Fast Open no more processing is needed (sk is the
 	 * child socket).
 	 */
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index a76061dc4e5f..8e1535635aab 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1085,6 +1085,7 @@ static unsigned int tcp_syn_options(struct sock *sk, struct sk_buff *skb,
 	/* Simultaneous open SYN/ACK needs AccECN option but not SYN */
 	if (unlikely((TCP_SKB_CB(skb)->tcp_flags & TCPHDR_ACK) &&
 		     tcp_ecn_mode_accecn(tp) &&
+		     inet_csk(sk)->icsk_retransmits < 2 &&
 		     sock_net(sk)->ipv4.sysctl_tcp_ecn_option &&
 		     remaining >= TCPOLEN_ACCECN_BASE)) {
 		u32 saving = tcp_synack_options_combine_saving(opts);
@@ -1174,7 +1175,7 @@ static unsigned int tcp_synack_options(const struct sock *sk,
 	smc_set_option_cond(tcp_sk(sk), ireq, opts, &remaining);

 	if (treq->accecn_ok && sock_net(sk)->ipv4.sysctl_tcp_ecn_option &&
-	    remaining >= TCPOLEN_ACCECN_BASE) {
+	    req->num_timeout < 1 && remaining >= TCPOLEN_ACCECN_BASE) {
 		u32 saving = tcp_synack_options_combine_saving(opts);

 		opts->ecn_bytes = synack_ecn_bytes;
@@ -1252,7 +1253,9 @@ static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb
 	}

 	if (tcp_ecn_mode_accecn(tp) &&
-	    sock_net(sk)->ipv4.sysctl_tcp_ecn_option) {
+	    sock_net(sk)->ipv4.sysctl_tcp_ecn_option &&
+	    tp->saw_accecn_opt &&
+	    !tcp_accecn_opt_fail_send(tp)) {
 		if (sock_net(sk)->ipv4.sysctl_tcp_ecn_option >= 2 ||
 		    tp->accecn_opt_demand ||
 		    tcp_accecn_option_beacon_check(sk)) {
On 4/22/25 5:35 PM, chia-yu.chang@nokia-bell-labs.com wrote:
@@ -555,6 +556,30 @@ static void smc_check_reset_syn_req(const struct tcp_sock *oldtp,
 #endif
 }

+u8 tcp_accecn_option_init(const struct sk_buff *skb, u8 opt_offset)
+{
+	unsigned char *ptr = skb_transport_header(skb) + opt_offset;
+	unsigned int optlen = ptr[1] - 2;
+
+	WARN_ON_ONCE(ptr[0] != TCPOPT_ACCECN0 && ptr[0] != TCPOPT_ACCECN1);
This warn should be dropped, too.
/P
From: Ilpo Järvinen ij@kernel.org
Implement the heuristic algorithm from draft-11 Appendix A.2.2 to mitigate false ACE field overflows.
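Restated compactly, the heuristic cross-checks the packet-based ACE delta against the CE byte counter delta (d_ceb) carried in the option. A sketch with userspace types (the kernel version lives in __tcp_accecn_process(); SAFETY_SHIFT assumed to be 1, as in the patch):

#include <stdint.h>

/* delta:      CE packet delta read from the 3-bit ACE field.
 * safe_delta: delta corrected under the assumption that ACE wrapped.
 * d_ceb:      CE byte counter delta from the AccECN option.
 */
static uint32_t pick_ce_delta(uint32_t delta, uint32_t safe_delta,
			      uint32_t d_ceb, uint32_t mss)
{
	if (!d_ceb)			/* no new CE bytes: no overflow */
		return delta;
	if (d_ceb > delta * mss)	/* more CE bytes than delta pkts hold */
		return safe_delta;
	if (d_ceb < (safe_delta * mss) >> 1)	/* too few for safe_delta */
		return delta;
	return safe_delta;
}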
Signed-off-by: Ilpo Järvinen ij@kernel.org Signed-off-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com --- include/net/tcp.h | 1 + net/ipv4/tcp_input.c | 18 ++++++++++++++++-- 2 files changed, 17 insertions(+), 2 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 0ade2873b84e..3ceed4792d13 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -244,6 +244,7 @@ static_assert((1 << ATO_BITS) > TCP_DELACK_MAX);
 #define TCP_ACCECN_MAXSIZE		(TCPOLEN_ACCECN_BASE + \
 					 TCPOLEN_ACCECN_PERFIELD * \
 					 TCP_ACCECN_NUMFIELDS)
+#define TCP_ACCECN_SAFETY_SHIFT		1 /* SAFETY_FACTOR in accecn draft */

 /* tp->accecn_fail_mode */
 #define TCP_ACCECN_ACE_FAIL_SEND	BIT(0)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 8f1e10530880..54f798161d14 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -694,16 +694,19 @@ static u32 __tcp_accecn_process(struct sock *sk, const struct sk_buff *skb,
 				u32 delivered_pkts, u32 delivered_bytes,
 				int flag)
 {
+	u32 old_ceb = tcp_sk(sk)->delivered_ecn_bytes[INET_ECN_CE - 1];
 	const struct tcphdr *th = tcp_hdr(skb);
 	struct tcp_sock *tp = tcp_sk(sk);
-	u32 delta, safe_delta;
+	u32 delta, safe_delta, d_ceb;
+	bool opt_deltas_valid;
 	u32 corrected_ace;

 	/* Reordered ACK or uncertain due to lack of data to send and ts */
 	if (!(flag & (FLAG_FORWARD_PROGRESS | FLAG_TS_PROGRESS)))
 		return 0;

-	tcp_accecn_process_option(tp, skb, delivered_bytes, flag);
+	opt_deltas_valid = tcp_accecn_process_option(tp, skb,
+						     delivered_bytes, flag);

 	if (!(flag & FLAG_SLOWPATH)) {
 		/* AccECN counter might overflow on large ACKs */
@@ -726,6 +729,17 @@ static u32 __tcp_accecn_process(struct sock *sk, const struct sk_buff *skb,
 	safe_delta = delivered_pkts -
 		     ((delivered_pkts - delta) & TCP_ACCECN_CEP_ACE_MASK);

+	if (opt_deltas_valid) {
+		d_ceb = tp->delivered_ecn_bytes[INET_ECN_CE - 1] - old_ceb;
+		if (!d_ceb)
+			return delta;
+		if (d_ceb > delta * tp->mss_cache)
+			return safe_delta;
+		if (d_ceb <
+		    safe_delta * tp->mss_cache >> TCP_ACCECN_SAFETY_SHIFT)
+			return delta;
+	}
+
 	return safe_delta;
 }
From: Ilpo Järvinen ij@kernel.org
With the ceb delta from the option, delivered bytes, and delivered packets, it is possible to estimate how many times the ACE field wrapped.

This calculation is necessary only if more than one wrap is possible. Without SACK, delivered bytes and packets are not always trustworthy, in which case TCP falls back to the simpler no-or-all-wraps ceb algorithm.
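A worked example under assumed numbers: with delivered_pkts = 40, delivered_bytes = 40000 and d_ceb = 30000, the scaled estimate is ceil(30000 * 40 / 40000) = 30 CE packets, rounded down to 24 (three full 8-packet ACE wraps) on top of the raw delta, and still clamped to safe_delta. A sketch with userspace types:

#include <stdint.h>

#define ACE_MASK 0x7	/* 3-bit ACE field wraps every 8 CE packets */

static uint32_t multiwrap_ce_delta(uint32_t delta, uint32_t safe_delta,
				   uint64_t d_ceb, uint32_t delivered_pkts,
				   uint32_t delivered_bytes)
{
	uint32_t est_d_cep, cand;

	if (delivered_bytes <= d_ceb)	/* everything was CE marked */
		return safe_delta;
	/* Scale CE bytes to packets via the average delivered pkt size */
	est_d_cep = (uint32_t)((d_ceb * delivered_pkts +
				delivered_bytes - 1) / delivered_bytes);
	/* Credit only whole wraps; never exceed the safe estimate */
	cand = delta + (est_d_cep & ~(uint32_t)ACE_MASK);
	return cand < safe_delta ? cand : safe_delta;
}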
Signed-off-by: Ilpo Järvinen ij@kernel.org Signed-off-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com --- net/ipv4/tcp_input.c | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 54f798161d14..c6dac3c2d47a 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -733,6 +733,24 @@ static u32 __tcp_accecn_process(struct sock *sk, const struct sk_buff *skb,
 		d_ceb = tp->delivered_ecn_bytes[INET_ECN_CE - 1] - old_ceb;
 		if (!d_ceb)
 			return delta;
+
+		if ((delivered_pkts >= (TCP_ACCECN_CEP_ACE_MASK + 1) * 2) &&
+		    (tcp_is_sack(tp) ||
+		     ((1 << inet_csk(sk)->icsk_ca_state) &
+		      (TCPF_CA_Open | TCPF_CA_CWR)))) {
+			u32 est_d_cep;
+
+			if (delivered_bytes <= d_ceb)
+				return safe_delta;
+
+			est_d_cep = DIV_ROUND_UP_ULL((u64)d_ceb *
+						     delivered_pkts,
+						     delivered_bytes);
+			return min(safe_delta,
+				   delta +
+				   (est_d_cep & ~TCP_ACCECN_CEP_ACE_MASK));
+		}
+
 		if (d_ceb > delta * tp->mss_cache)
 			return safe_delta;
 		if (d_ceb <
From: Ilpo Järvinen ij@kernel.org
As SACK blocks tend to eat all option space when there are many holes, it is useful to compromise between sending many SACK blocks in every ACK and fitting in the AccECN option by reducing the number of SACK blocks. But never go below two SACK blocks just to make room for the AccECN option.
As the AccECN option is often not put into every ACK, the space hijack is usually only temporary.
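Rough arithmetic to put numbers on the compromise, ignoring NOP alignment padding: timestamps take 12 of the 40 option bytes, leaving 28. Three SACK blocks need 2 + 3 * 8 = 26 bytes, leaving 2, which is below the 5-byte minimum AccECN option (kind/length plus one 24-bit field); dropping to two SACK blocks frees 8 bytes, so an option with up to two counter fields fits.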
Signed-off-by: Ilpo Järvinen ij@kernel.org Signed-off-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com --- net/ipv4/tcp_output.c | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 8e1535635aab..936ec8788c8e 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -981,8 +981,21 @@ static int tcp_options_fit_accecn(struct tcp_out_options *opts, int required,
 		opts->num_accecn_fields--;
 		size -= TCPOLEN_ACCECN_PERFIELD;
 	}
-	if (opts->num_accecn_fields < required)
+	if (opts->num_accecn_fields < required) {
+		if (opts->num_sack_blocks > 2) {
+			/* Try to fit the option by removing one SACK block */
+			opts->num_sack_blocks--;
+			size = tcp_options_fit_accecn(opts, required,
+						      remaining +
+						      TCPOLEN_SACK_PERBLOCK,
+						      max_combine_saving);
+			if (opts->options & OPTION_ACCECN)
+				return size - TCPOLEN_SACK_PERBLOCK;
+
+			opts->num_sack_blocks++;
+		}
 		return 0;
+	}

 	opts->options |= OPTION_ACCECN;
 	return size;
From: Ilpo Järvinen ij@kernel.org
Add an EWMA of newly acked packets. When ACK thinning occurs, select between the safer and unsafe cep delta in AccECN processing based on it. If the number of packets ACKed per ACK tends to be large, don't conservatively assume ACE field overflow.
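The update is a fixed-point EWMA with gain 1/64 and 6 fractional bits, i.e. avg_new = (63 * avg + pkts) / 64, seeded by the first sample. A standalone sketch of the same arithmetic:

#include <stdint.h>

#define PKTS_ACKED_WEIGHT 6	/* EWMA gain 1/64 */
#define PKTS_ACKED_PREC   6	/* fractional bits in the stored value */

static uint16_t pkts_acked_ewma_update(uint16_t ewma, uint32_t delivered_pkts)
{
	uint32_t e;

	if (!ewma)		/* first sample seeds the average */
		e = delivered_pkts << PKTS_ACKED_PREC;
	else
		e = (((uint32_t)ewma << PKTS_ACKED_WEIGHT) - ewma +
		     (delivered_pkts << PKTS_ACKED_PREC)) >>
		    PKTS_ACKED_WEIGHT;
	return e > 0xffff ? 0xffff : (uint16_t)e;
}

With ACK_COMP_THRESH of 4, the conservative overflow assumption is skipped once the average exceeds four packets per ACK, i.e. a stored value above 4 << 6 = 256.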
Signed-off-by: Ilpo Järvinen ij@kernel.org Signed-off-by: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com --- include/linux/tcp.h | 1 + net/ipv4/tcp.c | 4 +++- net/ipv4/tcp_input.c | 20 +++++++++++++++++++- 3 files changed, 23 insertions(+), 2 deletions(-)
diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index b93bf1785008..99ca0b8435c8 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -315,6 +315,7 @@ struct tcp_sock {
 		est_ecnfield:2;/* ECN field for AccECN delivered estimates */
 	u32	app_limited;	/* limited until "delivered" reaches this val */
 	u64	accecn_opt_tstamp;	/* Last AccECN option sent timestamp */
+	u16	pkts_acked_ewma;/* Pkts acked EWMA for AccECN cep heuristic */
 	u32	rcv_wnd;	/* Current receiver window */
/*
 *	Options received (usually on last packet, some only on SYN packets).
 */
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 03c205eaabe5..7af22c4615e6 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -3371,6 +3371,7 @@ int tcp_disconnect(struct sock *sk, int flags)
 	tcp_accecn_init_counters(tp);
 	tp->prev_ecnfield = 0;
 	tp->accecn_opt_tstamp = 0;
+	tp->pkts_acked_ewma = 0;
 	if (icsk->icsk_ca_initialized && icsk->icsk_ca_ops->release)
 		icsk->icsk_ca_ops->release(sk);
 	memset(icsk->icsk_ca_priv, 0, sizeof(icsk->icsk_ca_priv));
@@ -5111,13 +5112,14 @@ static void __init tcp_struct_check(void)
 	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, received_ecn_bytes);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, app_limited);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, accecn_opt_tstamp);
+	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, pkts_acked_ewma);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, rcv_wnd);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_txrx, rx_opt);

 	/* 32bit arches with 8byte alignment on u64 fields might need padding
 	 * before tcp_clock_cache.
 	 */
-	CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 130 + 6);
+	CACHELINE_ASSERT_GROUP_SIZE(struct tcp_sock, tcp_sock_write_txrx, 132 + 8);

 	/* RX read-write hotpath cache lines */
 	CACHELINE_ASSERT_GROUP_MEMBER(struct tcp_sock, tcp_sock_write_rx, bytes_received);
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index c6dac3c2d47a..5bdd82d3c201 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -689,6 +689,10 @@ static void tcp_count_delivered(struct tcp_sock *tp, u32 delivered,
 		tcp_count_delivered_ce(tp, delivered);
 }

+#define PKTS_ACKED_WEIGHT	6
+#define PKTS_ACKED_PREC		6
+#define ACK_COMP_THRESH		4
+
 /* Returns the ECN CE delta */
 static u32 __tcp_accecn_process(struct sock *sk, const struct sk_buff *skb,
 				u32 delivered_pkts, u32 delivered_bytes,
@@ -708,6 +712,19 @@ static u32 __tcp_accecn_process(struct sock *sk, const struct sk_buff *skb,
 	opt_deltas_valid = tcp_accecn_process_option(tp, skb,
 						     delivered_bytes, flag);

+	if (delivered_pkts) {
+		if (!tp->pkts_acked_ewma) {
+			tp->pkts_acked_ewma = delivered_pkts << PKTS_ACKED_PREC;
+		} else {
+			u32 ewma = tp->pkts_acked_ewma;
+
+			ewma = (((ewma << PKTS_ACKED_WEIGHT) - ewma) +
+				(delivered_pkts << PKTS_ACKED_PREC)) >>
+			       PKTS_ACKED_WEIGHT;
+			tp->pkts_acked_ewma = min_t(u32, ewma, 0xFFFFU);
+		}
+	}
+
 	if (!(flag & FLAG_SLOWPATH)) {
 		/* AccECN counter might overflow on large ACKs */
 		if (delivered_pkts <= TCP_ACCECN_CEP_ACE_MASK)
@@ -756,7 +773,8 @@ static u32 __tcp_accecn_process(struct sock *sk, const struct sk_buff *skb,
 		if (d_ceb <
 		    safe_delta * tp->mss_cache >> TCP_ACCECN_SAFETY_SHIFT)
 			return delta;
-	}
+	} else if (tp->pkts_acked_ewma > (ACK_COMP_THRESH << PKTS_ACKED_PREC))
+		return delta;

 	return safe_delta;
 }
On 4/22/25 5:36 PM, chia-yu.chang@nokia-bell-labs.com wrote:
diff --git a/include/linux/tcp.h b/include/linux/tcp.h index b93bf1785008..99ca0b8435c8 100644 --- a/include/linux/tcp.h +++ b/include/linux/tcp.h @@ -315,6 +315,7 @@ struct tcp_sock { est_ecnfield:2;/* ECN field for AccECN delivered estimates */ u32 app_limited; /* limited until "delivered" reaches this val */ u64 accecn_opt_tstamp; /* Last AccECN option sent timestamp */
- u16 pkts_acked_ewma;/* Pkts acked EWMA for AccECN cep heuristic */
It looks like this field is accessed only on the RX path and does not belong to this cacheline group.
/P
On Tue, Apr 22, 2025 at 05:35:47PM +0200, chia-yu.chang@nokia-bell-labs.com wrote:
From: Chia-Yu Chang chia-yu.chang@nokia-bell-labs.com
Hello,
Please find the v5:
v5 (22-Apr-2025)
- Further fix for 32-bit ARM alignment in tcp.c (Simon Horman horms@kernel.org)
v4 (18-Apr-2025)
- Fix 32-bit ARM assertion for alignment requirement (Simon Horman horms@kernel.org)
Thanks, I confirm that v6 appears to be clear w.r.t. these build checks for the 32-bit ARM alignment assertions.
...
On Tue, 22 Apr 2025 17:35:47 +0200 chia-yu.chang@nokia-bell-labs.com wrote:
Chia-Yu Chang (1): tcp: accecn: AccECN option failure handling
Ilpo Järvinen (14): tcp: reorganize SYN ECN code tcp: fast path functions later tcp: AccECN core tcp: accecn: AccECN negotiation tcp: accecn: add AccECN rx byte counters tcp: accecn: AccECN needs to know delivered bytes tcp: allow embedding leftover into option padding tcp: sack option handling improvements tcp: accecn: AccECN option tcp: accecn: AccECN option send control tcp: accecn: AccECN option ceb/cep heuristic tcp: accecn: AccECN ACE field multi-wrap heuristic tcp: accecn: try to fit AccECN option with SACK tcp: try to avoid safer when ACKs are thinned
Hi Neal! Could you pass your judgment on these? Given Eric is AFK / busy.
On Fri, Apr 25, 2025 at 8:32 PM Jakub Kicinski kuba@kernel.org wrote:
Hi Neal! Could you pass your judgment on these? Given Eric is AFK / busy.
Hi Jakub,
I'm a bit overloaded at the moment, but will try to get to these reviews ASAP.
Thanks! neal