The first patch of this series is an improvement to the existing syncookie BPF helper.
The two other patches add new functionality that allows XDP to accelerate iptables synproxy.
v1 of this series [1] used to include a patch that exposed conntrack lookup to BPF using stable helpers. It was superseded by series [2] by Kumar Kartikeya Dwivedi, which implements this functionality using unstable helpers.
The second patch adds new helpers to issue and check SYN cookies without binding to a socket, which is useful in the synproxy scenario.
The third patch adds a selftest, which consists of a script, an XDP program and a userspace control application. The XDP program uses the socketless SYN cookie helpers and queries conntrack status instead of socket status. The userspace control application allows tuning the parameters of the XDP program. This program also serves as a minimal example of how to use the new functionality.
The draft of the new functionality was presented at Netdev 0x15 [3].
v2 changes:
Split into two series, submitted bugfixes to bpf, dropped the conntrack patches, implemented the timestamp cookie in BPF using bpf_loop, dropped the timestamp cookie patch.
[1]: https://lore.kernel.org/bpf/20211020095815.GJ28644@breakpoint.cc/t/
[2]: https://lore.kernel.org/bpf/20220114163953.1455836-1-memxor@gmail.com/
[3]: https://netdevconf.info/0x15/session.html?Accelerating-synproxy-with-XDP
Maxim Mikityanskiy (3):
  bpf: Make errors of bpf_tcp_check_syncookie distinguishable
  bpf: Add helpers to issue and check SYN cookies in XDP
  bpf: Add selftests for raw syncookie helpers
 include/net/tcp.h                             |    1 +
 include/uapi/linux/bpf.h                      |   75 +-
 net/core/filter.c                             |  128 ++-
 net/ipv4/tcp_input.c                          |    3 +-
 tools/include/uapi/linux/bpf.h                |   75 +-
 tools/testing/selftests/bpf/.gitignore        |    1 +
 tools/testing/selftests/bpf/Makefile          |    5 +-
 .../selftests/bpf/progs/xdp_synproxy_kern.c   |  743 ++++++++++++++++++
 .../selftests/bpf/test_xdp_synproxy.sh        |   71 ++
 tools/testing/selftests/bpf/xdp_synproxy.c    |  418 ++++++++++
 10 files changed, 1510 insertions(+), 10 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c
 create mode 100755 tools/testing/selftests/bpf/test_xdp_synproxy.sh
 create mode 100644 tools/testing/selftests/bpf/xdp_synproxy.c
bpf_tcp_check_syncookie returns ambiguous error codes in some cases. The list below shows various error conditions and matching error codes:
1. NULL socket: -EINVAL.
2. Invalid packet: -EINVAL, -ENOENT.
3. Bad cookie: -ENOENT.
4. Cookies are not in use: -EINVAL, -ENOENT.
5. Good cookie: 0.
As we can see, the same error code may correspond to multiple error conditions, making them indistinguishable, and at the same time one error condition may return different codes, even though it's typically handled in the same way.
This patch reassigns error codes of bpf_tcp_check_syncookie and documents them:
1. Invalid packet or NULL socket: -EINVAL.
2. Bad cookie: -EACCES.
3. Cookies are not in use: -ENOENT.
4. Good cookie: 0.
This change allows XDP programs to make smarter decisions based on the error code, because different error conditions are now easily distinguishable.
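For illustration, a minimal sketch of how an XDP program could now branch on the specific error. This is not code from the series: it assumes the helper declaration from a regenerated bpf_helper_defs.h plus <errno.h>, and that sk, iph and th were already looked up and bounds-checked earlier in the program.

#include <linux/bpf.h>
#include <linux/tcp.h>
#include <errno.h>
#include <bpf/bpf_helpers.h>

static __always_inline int handle_cookie_ack(void *sk, void *iph, __u32 iph_len,
					     struct tcphdr *th)
{
	long err = bpf_tcp_check_syncookie(sk, iph, iph_len, th, sizeof(*th));

	if (!err)
		return XDP_PASS;	/* good cookie, let the stack accept it */
	if (err == -EACCES)
		return XDP_DROP;	/* bad cookie: bogus or replayed ACK */
	if (err == -ENOENT)
		return XDP_PASS;	/* cookies not in use, regular processing */
	return XDP_DROP;		/* -EINVAL and friends: malformed input */
}

With the old codes, the last three branches could not be told apart.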
Backward compatibility shouldn't suffer because of these reasons:
1. The specific error codes weren't documented. The behavior that used to be documented (0 is good cookie, negative values are errors) still holds. Anyone who relied on implementation details should have understood the risks.
2. Two known use cases (classification of ACKs with cookies that initiate new connections, and SYN flood protection) make decisions that don't depend on specific error codes:
Traffic classification:
   ACK packet is new, error == 0: classify as NEW.
   ACK packet is new, error < 0: classify as INVALID.

SYN flood protection:
   ACK packet is new, error == 0: good cookie, XDP_PASS.
   ACK packet is new, error < 0: bad cookie, XDP_DROP.
As Lorenz Bauer confirms, neither their traffic classifier implementation nor the kernel selftests will break.
3. It's hard to imagine that the old error codes could be used for any useful decisions.
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
---
 include/uapi/linux/bpf.h       | 18 ++++++++++++++++--
 net/core/filter.c              |  6 +++---
 tools/include/uapi/linux/bpf.h | 18 ++++++++++++++++--
 3 files changed, 35 insertions(+), 7 deletions(-)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 16a7574292a5..4d2d4a09bf25 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -3575,8 +3575,22 @@ union bpf_attr {
  *		*th* points to the start of the TCP header, while *th_len*
  *		contains **sizeof**\ (**struct tcphdr**).
  *	Return
- *		0 if *iph* and *th* are a valid SYN cookie ACK, or a negative
- *		error otherwise.
+ *		0 if *iph* and *th* are a valid SYN cookie ACK.
+ *
+ *		On failure, the returned value is one of the following:
+ *
+ *		**-EACCES** if the SYN cookie is not valid.
+ *
+ *		**-EINVAL** if the packet or input arguments are invalid.
+ *
+ *		**-ENOENT** if SYN cookies are not issued (no SYN flood, or SYN
+ *		cookies are disabled in sysctl).
+ *
+ *		**-EOPNOTSUPP** if the kernel configuration does not enable SYN
+ *		cookies (CONFIG_SYN_COOKIES is off).
+ *
+ *		**-EPROTONOSUPPORT** if the IP version is not 4 or 6 (or 6, but
+ *		CONFIG_IPV6 is disabled).
  *
  * long bpf_sysctl_get_name(struct bpf_sysctl *ctx, char *buf, size_t buf_len, u64 flags)
  *	Description
diff --git a/net/core/filter.c b/net/core/filter.c
index a06931c27eeb..18559b5828a3 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -6998,10 +6998,10 @@ BPF_CALL_5(bpf_tcp_check_syncookie, struct sock *, sk, void *, iph, u32, iph_len
 		return -EINVAL;
 
 	if (!sock_net(sk)->ipv4.sysctl_tcp_syncookies)
-		return -EINVAL;
+		return -ENOENT;
 
 	if (!th->ack || th->rst || th->syn)
-		return -ENOENT;
+		return -EINVAL;
 
 	if (tcp_synq_no_recent_overflow(sk))
 		return -ENOENT;
@@ -7032,7 +7032,7 @@ BPF_CALL_5(bpf_tcp_check_syncookie, struct sock *, sk, void *, iph, u32, iph_len
 	if (ret > 0)
 		return 0;
 
-	return -ENOENT;
+	return -EACCES;
 #else
 	return -ENOTSUPP;
 #endif
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 16a7574292a5..4d2d4a09bf25 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -3575,8 +3575,22 @@ union bpf_attr {
  *		*th* points to the start of the TCP header, while *th_len*
  *		contains **sizeof**\ (**struct tcphdr**).
  *	Return
- *		0 if *iph* and *th* are a valid SYN cookie ACK, or a negative
- *		error otherwise.
+ *		0 if *iph* and *th* are a valid SYN cookie ACK.
+ *
+ *		On failure, the returned value is one of the following:
+ *
+ *		**-EACCES** if the SYN cookie is not valid.
+ *
+ *		**-EINVAL** if the packet or input arguments are invalid.
+ *
+ *		**-ENOENT** if SYN cookies are not issued (no SYN flood, or SYN
+ *		cookies are disabled in sysctl).
+ *
+ *		**-EOPNOTSUPP** if the kernel configuration does not enable SYN
+ *		cookies (CONFIG_SYN_COOKIES is off).
+ *
+ *		**-EPROTONOSUPPORT** if the IP version is not 4 or 6 (or 6, but
+ *		CONFIG_IPV6 is disabled).
  *
  * long bpf_sysctl_get_name(struct bpf_sysctl *ctx, char *buf, size_t buf_len, u64 flags)
  *	Description
Maxim Mikityanskiy wrote:
bpf_tcp_check_syncookie returns ambiguous error codes in some cases. The list below shows various error conditions and matching error codes:
NULL socket: -EINVAL.
Invalid packet: -EINVAL, -ENOENT.
Bad cookie: -ENOENT.
Cookies are not in use: -EINVAL, -ENOENT.
Good cookie: 0.
As we see, the same error code may correspond to multiple error conditions, making them undistinguishable, and at the same time one error condition may return different codes, although it's typically handled in the same way.
This patch reassigns error codes of bpf_tcp_check_syncookie and documents them:
Invalid packet or NULL socket: -EINVAL;
Bad cookie: -EACCES.
Cookies are not in use: -ENOENT.
Good cookie: 0.
This change allows XDP programs to make smarter decisions based on error code, because different error conditions are now easily distinguishable.
I'm missing the point here; it still looks a bit like shuffling code around. What do you gain, what's the real bug? Are you trying to differentiate between an overflow condition and a valid syncookie? But I don't think you said this anywhere.
At the moment EINVAL tells me something's wrong with the input or configuration, although really any app that cares checked the sysctl flag, right?
ENOENT tells me either a recent overflow or an invalid cookie. As for the '!ack || rst || syn' case, I can learn that directly from the program (why would a real program throw these at the helper), but it also falls into the incorrect cookie case in some sense.
Backward compatibility shouldn't suffer because of these reasons:
- The specific error codes weren't documented. The behavior that used to be documented (0 is good cookie, negative values are errors) still holds. Anyone who relied on implementation details should have understood the risks.
I'll disagree: just because a user-facing API doesn't document its behavior very well doesn't mean users should somehow understand the risks. Ideally we would have done better with error codes up front, but that's old history. If a user complains that this breaks a real application, it would likely be reason to revert it.
At least I would remove this from the commit.
Two known usecases (classification of ACKs with cookies that initial new connections, SYN flood protection) take decisions which don't depend on specific error codes:
Traffic classification: ACK packet is new, error == 0: classify as NEW. ACK packet is new, error < 0: classify as INVALID.
SYN flood protection: ACK packet is new, error == 0: good cookie, XDP_PASS. ACK packet is new, error < 0: bad cookie, XDP_DROP.
As Lorenz Bauer confirms, their implementation of traffic classifier won't break, as well as the kernel selftests.
It's hard to imagine that old error codes could be used for any useful decisions.
Signed-off-by: Maxim Mikityanskiy maximmi@nvidia.com Reviewed-by: Tariq Toukan tariqt@nvidia.com
include/uapi/linux/bpf.h | 18 ++++++++++++++++-- net/core/filter.c | 6 +++--- tools/include/uapi/linux/bpf.h | 18 ++++++++++++++++-- 3 files changed, 35 insertions(+), 7 deletions(-)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 16a7574292a5..4d2d4a09bf25 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -3575,8 +3575,22 @@ union bpf_attr {
*th* points to the start of the TCP header, while *th_len*
contains **sizeof**\ (**struct tcphdr**).
- Return
0 if *iph* and *th* are a valid SYN cookie ACK, or a negative
error otherwise.
0 if *iph* and *th* are a valid SYN cookie ACK.
On failure, the returned value is one of the following:
**-EACCES** if the SYN cookie is not valid.
**-EINVAL** if the packet or input arguments are invalid.
**-ENOENT** if SYN cookies are not issued (no SYN flood, or SYN
cookies are disabled in sysctl).
**-EOPNOTSUPP** if the kernel configuration does not enable SYN
cookies (CONFIG_SYN_COOKIES is off).
**-EPROTONOSUPPORT** if the IP version is not 4 or 6 (or 6, but
CONFIG_IPV6 is disabled).
- long bpf_sysctl_get_name(struct bpf_sysctl *ctx, char *buf, size_t buf_len, u64 flags)
- Description
diff --git a/net/core/filter.c b/net/core/filter.c index a06931c27eeb..18559b5828a3 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -6998,10 +6998,10 @@ BPF_CALL_5(bpf_tcp_check_syncookie, struct sock *, sk, void *, iph, u32, iph_len return -EINVAL; if (!sock_net(sk)->ipv4.sysctl_tcp_syncookies)
return -EINVAL;
return -ENOENT;
I wouldn't change this.
if (!th->ack || th->rst || th->syn)
return -ENOENT;
return -EINVAL;
Not sure if this is a useful change, and it is visible to BPF programs.
if (tcp_synq_no_recent_overflow(sk)) return -ENOENT; @@ -7032,7 +7032,7 @@ BPF_CALL_5(bpf_tcp_check_syncookie, struct sock *, sk, void *, iph, u32, iph_len if (ret > 0) return 0;
- return -ENOENT;
- return -EACCES;
This one might have a valid argument to differentiate between an overflow condition and an invalid cookie. But I'm curious, what do you do with this info?
Thanks, John
On 2022-01-25 09:38, John Fastabend wrote:
Maxim Mikityanskiy wrote:
bpf_tcp_check_syncookie returns ambiguous error codes in some cases. The list below shows various error conditions and matching error codes:
NULL socket: -EINVAL.
Invalid packet: -EINVAL, -ENOENT.
Bad cookie: -ENOENT.
Cookies are not in use: -EINVAL, -ENOENT.
Good cookie: 0.
As we see, the same error code may correspond to multiple error conditions, making them undistinguishable, and at the same time one error condition may return different codes, although it's typically handled in the same way.
This patch reassigns error codes of bpf_tcp_check_syncookie and documents them:
Invalid packet or NULL socket: -EINVAL;
Bad cookie: -EACCES.
Cookies are not in use: -ENOENT.
Good cookie: 0.
This change allows XDP programs to make smarter decisions based on error code, because different error conditions are now easily distinguishable.
I'm missing the point here it still looks a bit like shuffling around of code. What do you gain, whats the real bug?
Current error codes are useless. If the caller gets a negative value, all it knows is that some error happened. I'm making different error conditions distinguishable (invalid input, bad cookie, not expecting a cookie).
Use cases could be: statistic counters, debugging, logging. For example, the kernel counts LINUX_MIB_SYNCOOKIESFAILED, which only includes the "bad cookie" case.
Another use case is replying with RST when not expecting a cookie (a new ACK packet could just mean that the server was rebooted, and it's good to tell the client that the connection is broken), but dropping packets under the flood on receiving bad cookies (to avoid wasting resources on sending replies).
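To illustrate the counters point (just a sketch, not code from this series; the map layout and names are made up, and it assumes <bpf/bpf_helpers.h> plus <errno.h>), a program could keep per-reason counters in the spirit of LINUX_MIB_SYNCOOKIESFAILED, which only becomes possible once the reasons are distinguishable:

enum { STAT_BAD_COOKIE, STAT_NO_COOKIE_EXPECTED, STAT_BAD_INPUT, STAT_MAX };

struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
	__uint(max_entries, STAT_MAX);
	__type(key, __u32);
	__type(value, __u64);
} cookie_stats SEC(".maps");

static __always_inline void count_cookie_error(long err)
{
	__u32 key = err == -EACCES ? STAT_BAD_COOKIE :
		    err == -ENOENT ? STAT_NO_COOKIE_EXPECTED : STAT_BAD_INPUT;
	__u64 *val = bpf_map_lookup_elem(&cookie_stats, &key);

	if (val)
		(*val)++;	/* per-CPU slot, no atomics needed */
}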
Are you trying to differentiate between an overflow condition and a valid syncookie? But I don't think you said this anywhere.
I'm not sure what you mean by the "overflow condition", but a valid syncookie is easily distinguishable from other cases, it's 0.
At the moment EINVAL tells me somethings wrong with the input or configuration, although really any app that cares checked the sysctl flag right?
There is no way to check the sysctl from XDP. A single program could be useful both with and without syncookies; it could determine its behavior at runtime, not to mention that the sysctl option could change at runtime. The workaround you suggested will work in some cases, but other cases just won't work (there are no notification events on sysctl changes that a userspace application could monitor and pass to the XDP program; it wouldn't be immediate anyway).
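(For reference, that workaround would look roughly like the sketch below: a hypothetical control application that mirrors the sysctl into a map the XDP program reads; the function and map fd are invented for illustration. It has exactly the gap described above, because it only picks up changes when it is re-run or polled.)

#include <stdio.h>
#include <bpf/bpf.h>

/* Mirror net.ipv4.tcp_syncookies into a one-element array map that the
 * XDP program consults on every packet.
 */
static int mirror_syncookies_sysctl(int map_fd)
{
	__u32 key = 0, value = 0;
	FILE *f = fopen("/proc/sys/net/ipv4/tcp_syncookies", "r");

	if (!f)
		return -1;
	if (fscanf(f, "%u", &value) != 1) {
		fclose(f);
		return -1;
	}
	fclose(f);

	return bpf_map_update_elem(map_fd, &key, &value, BPF_ANY);
}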
ENOENT tells me either recent overflow or cookie is invalid.
And these cases should be distinguished, as I said above.
If there is no '!ack || rst || syn' then I can either learn that directly from the program (why would a real program through these at the helper), but it also falls into the incorrect cookie in some sense.
Backward compatibility shouldn't suffer because of these reasons:
- The specific error codes weren't documented. The behavior that used to be documented (0 is good cookie, negative values are errors) still holds. Anyone who relied on implementation details should have understood the risks.
I'll disagree, just because a user facing API doesn't document its behavior very well doesn't mean users should some how understand the risks. Ideally we would have done better with error codes up front, but thats old history. If a user complains that this breaks a real application it would likely be reason to revert it.
At least I would remove this from the commit.
Two known usecases (classification of ACKs with cookies that initial new connections, SYN flood protection) take decisions which don't depend on specific error codes:
Traffic classification: ACK packet is new, error == 0: classify as NEW. ACK packet is new, error < 0: classify as INVALID.
SYN flood protection: ACK packet is new, error == 0: good cookie, XDP_PASS. ACK packet is new, error < 0: bad cookie, XDP_DROP.
As Lorenz Bauer confirms, their implementation of traffic classifier won't break, as well as the kernel selftests.
It's hard to imagine that old error codes could be used for any useful decisions.
Signed-off-by: Maxim Mikityanskiy maximmi@nvidia.com Reviewed-by: Tariq Toukan tariqt@nvidia.com
include/uapi/linux/bpf.h | 18 ++++++++++++++++-- net/core/filter.c | 6 +++--- tools/include/uapi/linux/bpf.h | 18 ++++++++++++++++-- 3 files changed, 35 insertions(+), 7 deletions(-)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 16a7574292a5..4d2d4a09bf25 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -3575,8 +3575,22 @@ union bpf_attr {
*th* points to the start of the TCP header, while *th_len*
contains **sizeof**\ (**struct tcphdr**).
- Return
0 if *iph* and *th* are a valid SYN cookie ACK, or a negative
error otherwise.
0 if *iph* and *th* are a valid SYN cookie ACK.
On failure, the returned value is one of the following:
**-EACCES** if the SYN cookie is not valid.
**-EINVAL** if the packet or input arguments are invalid.
**-ENOENT** if SYN cookies are not issued (no SYN flood, or SYN
cookies are disabled in sysctl).
**-EOPNOTSUPP** if the kernel configuration does not enable SYN
cookies (CONFIG_SYN_COOKIES is off).
**-EPROTONOSUPPORT** if the IP version is not 4 or 6 (or 6, but
CONFIG_IPV6 is disabled).
- long bpf_sysctl_get_name(struct bpf_sysctl *ctx, char *buf, size_t buf_len, u64 flags)
- Description
diff --git a/net/core/filter.c b/net/core/filter.c index a06931c27eeb..18559b5828a3 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -6998,10 +6998,10 @@ BPF_CALL_5(bpf_tcp_check_syncookie, struct sock *, sk, void *, iph, u32, iph_len return -EINVAL; if (!sock_net(sk)->ipv4.sysctl_tcp_syncookies)
return -EINVAL;
return -ENOENT;
I wouldn't change this.
if (!th->ack || th->rst || th->syn)
return -ENOENT;
return -EINVAL;
not sure if this is useful change and it is bpf program visible.
if (tcp_synq_no_recent_overflow(sk)) return -ENOENT; @@ -7032,7 +7032,7 @@ BPF_CALL_5(bpf_tcp_check_syncookie, struct sock *, sk, void *, iph, u32, iph_len if (ret > 0) return 0;
- return -ENOENT;
- return -EACCES;
This one might have a valid argument to differentiate between an overflow condition and an invalid cookie. But, curious what do you do with this info?
Thanks, John
The new helpers bpf_tcp_raw_{gen,check}_syncookie allow an XDP program to generate SYN cookies in response to TCP SYN packets and to check those cookies upon receiving the first ACK packet (the final packet of the TCP handshake).
Unlike bpf_tcp_{gen,check}_syncookie, these new helpers don't need a listening socket on the local machine, which makes it possible to use them together with synproxy to accelerate SYN cookie generation.
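The expected calling pattern from an XDP program is roughly the following. This is only an illustrative sketch, not the selftest from the next patch: it assumes helper declarations regenerated from the updated bpf.h, omits some of the packet bounds checks the verifier requires, and leaves out building the actual SYN-ACK reply.

static __always_inline int handle_v4_tcp(struct iphdr *iph, struct tcphdr *th,
					 void *data_end)
{
	if ((void *)(th + 1) > data_end)
		return XDP_DROP;

	if (th->syn && !th->ack) {
		/* SYN: issue a cookie without any listening socket. */
		__u32 tcp_len = th->doff * 4;
		__s64 value;

		if (tcp_len < sizeof(*th) || (void *)th + tcp_len > data_end)
			return XDP_DROP;

		value = bpf_tcp_raw_gen_syncookie(iph, sizeof(*iph), th, tcp_len);
		if (value < 0)
			return XDP_ABORTED;

		/* Lower 32 bits: the cookie; next 16 bits: the MSS for it. */
		__u32 cookie = (__u32)value;
		__u16 mss = value >> 32;

		/* ... rewrite the packet into a SYN-ACK with seq = cookie and
		 * the MSS option set from mss, then reflect it with XDP_TX ...
		 */
		return XDP_TX;
	}

	if (th->ack && !th->syn && !th->rst) {
		/* First ACK of the handshake: verify the cookie. */
		if (bpf_tcp_raw_check_syncookie(iph, sizeof(*iph), th, sizeof(*th)))
			return XDP_DROP;
		/* Good cookie: pass it on so synproxy/conntrack sees it. */
	}

	return XDP_PASS;
}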
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
---
 include/net/tcp.h              |   1 +
 include/uapi/linux/bpf.h       |  57 +++++++++++++++
 net/core/filter.c              | 122 +++++++++++++++++++++++++++++++++
 net/ipv4/tcp_input.c           |   3 +-
 tools/include/uapi/linux/bpf.h |  57 +++++++++++++++
 5 files changed, 239 insertions(+), 1 deletion(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 44e442bf23f9..0aca79f7b1be 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -432,6 +432,7 @@ u16 tcp_v4_get_syncookie(struct sock *sk, struct iphdr *iph,
 			 struct tcphdr *th, u32 *cookie);
 u16 tcp_v6_get_syncookie(struct sock *sk, struct ipv6hdr *iph,
 			 struct tcphdr *th, u32 *cookie);
+u16 tcp_parse_mss_option(const struct tcphdr *th, u16 user_mss);
 u16 tcp_get_syncookie_mss(struct request_sock_ops *rsk_ops,
 			  const struct tcp_request_sock_ops *af_ops,
 			  struct sock *sk, struct tcphdr *th);
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 4d2d4a09bf25..07864277297b 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -5090,6 +5090,61 @@ union bpf_attr {
  *		associated to *xdp_md*, at *offset*.
  *	Return
  *		0 on success, or a negative error in case of failure.
+ *
+ * s64 bpf_tcp_raw_gen_syncookie(void *iph, u32 iph_len, struct tcphdr *th, u32 th_len)
+ *	Description
+ *		Try to issue a SYN cookie for the packet with corresponding
+ *		IP/TCP headers, *iph* and *th*, without depending on a listening
+ *		socket.
+ *
+ *		*iph* points to the start of the IPv4 or IPv6 header, while
+ *		*iph_len* contains **sizeof**\ (**struct iphdr**) or
+ *		**sizeof**\ (**struct ip6hdr**).
+ *
+ *		*th* points to the start of the TCP header, while *th_len*
+ *		contains the length of the TCP header (at least
+ *		**sizeof**\ (**struct tcphdr**)).
+ *	Return
+ *		On success, lower 32 bits hold the generated SYN cookie in
+ *		followed by 16 bits which hold the MSS value for that cookie,
+ *		and the top 16 bits are unused.
+ *
+ *		On failure, the returned value is one of the following:
+ *
+ *		**-EINVAL** if the packet or input arguments are invalid.
+ *
+ *		**-EOPNOTSUPP** if the kernel configuration does not enable SYN
+ *		cookies (CONFIG_SYN_COOKIES is off).
+ *
+ *		**-EPROTONOSUPPORT** if the IP version is not 4 or 6 (or 6, but
+ *		CONFIG_IPV6 is disabled).
+ *
+ * int bpf_tcp_raw_check_syncookie(void *iph, u32 iph_len, struct tcphdr *th, u32 th_len)
+ *	Description
+ *		Check whether *iph* and *th* contain a valid SYN cookie ACK
+ *		without depending on a listening socket.
+ *
+ *		*iph* points to the start of the IPv4 or IPv6 header, while
+ *		*iph_len* contains **sizeof**\ (**struct iphdr**) or
+ *		**sizeof**\ (**struct ip6hdr**).
+ *
+ *		*th* points to the start of the TCP header, while *th_len*
+ *		contains the length of the TCP header (at least
+ *		**sizeof**\ (**struct tcphdr**)).
+ *	Return
+ *		0 if *iph* and *th* are a valid SYN cookie ACK.
+ *
+ *		On failure, the returned value is one of the following:
+ *
+ *		**-EACCES** if the SYN cookie is not valid.
+ *
+ *		**-EINVAL** if the packet or input arguments are invalid.
+ *
+ *		**-EOPNOTSUPP** if the kernel configuration does not enable SYN
+ *		cookies (CONFIG_SYN_COOKIES is off).
+ *
+ *		**-EPROTONOSUPPORT** if the IP version is not 4 or 6 (or 6, but
+ *		CONFIG_IPV6 is disabled).
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5283,6 +5338,8 @@ union bpf_attr {
 	FN(xdp_get_buff_len),		\
 	FN(xdp_load_bytes),		\
 	FN(xdp_store_bytes),		\
+	FN(tcp_raw_gen_syncookie),	\
+	FN(tcp_raw_check_syncookie),	\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/net/core/filter.c b/net/core/filter.c
index 18559b5828a3..dc90d2649a7a 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -7375,6 +7375,124 @@ static const struct bpf_func_proto bpf_sock_ops_reserve_hdr_opt_proto = {
 	.arg3_type	= ARG_ANYTHING,
 };
 
+BPF_CALL_4(bpf_tcp_raw_gen_syncookie, void *, iph, u32, iph_len,
+	   struct tcphdr *, th, u32, th_len)
+{
+#ifdef CONFIG_SYN_COOKIES
+	u32 cookie;
+	u16 mss;
+
+	if (unlikely(th_len < sizeof(*th) || th_len != th->doff * 4))
+		return -EINVAL;
+
+	if (!th->syn || th->ack || th->fin || th->rst)
+		return -EINVAL;
+
+	if (unlikely(iph_len < sizeof(struct iphdr)))
+		return -EINVAL;
+
+	/* Both struct iphdr and struct ipv6hdr have the version field at the
+	 * same offset so we can cast to the shorter header (struct iphdr).
+	 */
+	switch (((struct iphdr *)iph)->version) {
+	case 4:
+		mss = tcp_parse_mss_option(th, 0) ?: TCP_MSS_DEFAULT;
+		cookie = __cookie_v4_init_sequence(iph, th, &mss);
+		break;
+
+#if IS_BUILTIN(CONFIG_IPV6)
+	case 6: {
+		const u16 mss_clamp = IPV6_MIN_MTU - sizeof(struct tcphdr) - sizeof(struct ipv6hdr);
+
+		if (unlikely(iph_len < sizeof(struct ipv6hdr)))
+			return -EINVAL;
+
+		mss = tcp_parse_mss_option(th, 0) ?: mss_clamp;
+		cookie = __cookie_v6_init_sequence(iph, th, &mss);
+		break;
+	}
+#endif /* CONFIG_IPV6 */
+
+	default:
+		return -EPROTONOSUPPORT;
+	}
+
+	return cookie | ((u64)mss << 32);
+#else
+	return -EOPNOTSUPP;
+#endif /* CONFIG_SYN_COOKIES */
+}
+
+static const struct bpf_func_proto bpf_tcp_raw_gen_syncookie_proto = {
+	.func		= bpf_tcp_raw_gen_syncookie,
+	.gpl_only	= true, /* __cookie_v*_init_sequence() is GPL */
+	.pkt_access	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_MEM,
+	.arg2_type	= ARG_CONST_SIZE,
+	.arg3_type	= ARG_PTR_TO_MEM,
+	.arg4_type	= ARG_CONST_SIZE,
+};
+
+BPF_CALL_4(bpf_tcp_raw_check_syncookie, void *, iph, u32, iph_len,
+	   struct tcphdr *, th, u32, th_len)
+{
+#ifdef CONFIG_SYN_COOKIES
+	u32 cookie;
+	int ret;
+
+	if (unlikely(th_len < sizeof(*th)))
+		return -EINVAL;
+
+	if (!th->ack || th->rst || th->syn)
+		return -EINVAL;
+
+	if (unlikely(iph_len < sizeof(struct iphdr)))
+		return -EINVAL;
+
+	cookie = ntohl(th->ack_seq) - 1;
+
+	/* Both struct iphdr and struct ipv6hdr have the version field at the
+	 * same offset so we can cast to the shorter header (struct iphdr).
+	 */
+	switch (((struct iphdr *)iph)->version) {
+	case 4:
+		ret = __cookie_v4_check((struct iphdr *)iph, th, cookie);
+		break;
+
+#if IS_BUILTIN(CONFIG_IPV6)
+	case 6:
+		if (unlikely(iph_len < sizeof(struct ipv6hdr)))
+			return -EINVAL;
+
+		ret = __cookie_v6_check((struct ipv6hdr *)iph, th, cookie);
+		break;
+#endif /* CONFIG_IPV6 */
+
+	default:
+		return -EPROTONOSUPPORT;
+	}
+
+	if (ret > 0)
+		return 0;
+
+	return -EACCES;
+#else
+	return -EOPNOTSUPP;
+#endif
+}
+
+static const struct bpf_func_proto bpf_tcp_raw_check_syncookie_proto = {
+	.func		= bpf_tcp_raw_check_syncookie,
+	.gpl_only	= true, /* __cookie_v*_check is GPL */
+	.pkt_access	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_MEM,
+	.arg2_type	= ARG_CONST_SIZE,
+	.arg3_type	= ARG_PTR_TO_MEM,
+	.arg4_type	= ARG_CONST_SIZE,
+};
+
 #endif /* CONFIG_INET */
 
 bool bpf_helper_changes_pkt_data(void *func)
@@ -7785,6 +7903,10 @@ xdp_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_tcp_check_syncookie_proto;
 	case BPF_FUNC_tcp_gen_syncookie:
 		return &bpf_tcp_gen_syncookie_proto;
+	case BPF_FUNC_tcp_raw_gen_syncookie:
+		return &bpf_tcp_raw_gen_syncookie_proto;
+	case BPF_FUNC_tcp_raw_check_syncookie:
+		return &bpf_tcp_raw_check_syncookie_proto;
 #endif
 	default:
 		return bpf_sk_base_func_proto(func_id);
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index dc49a3d551eb..a60f6aa44b02 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -3961,7 +3961,7 @@ static bool smc_parse_options(const struct tcphdr *th,
 /* Try to parse the MSS option from the TCP header. Return 0 on failure, clamped
  * value on success.
  */
-static u16 tcp_parse_mss_option(const struct tcphdr *th, u16 user_mss)
+u16 tcp_parse_mss_option(const struct tcphdr *th, u16 user_mss)
 {
 	const unsigned char *ptr = (const unsigned char *)(th + 1);
 	int length = (th->doff * 4) - sizeof(struct tcphdr);
@@ -4000,6 +4000,7 @@ static u16 tcp_parse_mss_option(const struct tcphdr *th, u16 user_mss)
 	}
 	return mss;
 }
+EXPORT_SYMBOL_GPL(tcp_parse_mss_option);
 
 /* Look for tcp options. Normally only called on SYN and SYNACK packets.
  * But, this can also be called on packets in the established flow when
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 4d2d4a09bf25..07864277297b 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -5090,6 +5090,61 @@ union bpf_attr {
  *		associated to *xdp_md*, at *offset*.
  *	Return
  *		0 on success, or a negative error in case of failure.
+ *
+ * s64 bpf_tcp_raw_gen_syncookie(void *iph, u32 iph_len, struct tcphdr *th, u32 th_len)
+ *	Description
+ *		Try to issue a SYN cookie for the packet with corresponding
+ *		IP/TCP headers, *iph* and *th*, without depending on a listening
+ *		socket.
+ *
+ *		*iph* points to the start of the IPv4 or IPv6 header, while
+ *		*iph_len* contains **sizeof**\ (**struct iphdr**) or
+ *		**sizeof**\ (**struct ip6hdr**).
+ *
+ *		*th* points to the start of the TCP header, while *th_len*
+ *		contains the length of the TCP header (at least
+ *		**sizeof**\ (**struct tcphdr**)).
+ *	Return
+ *		On success, lower 32 bits hold the generated SYN cookie in
+ *		followed by 16 bits which hold the MSS value for that cookie,
+ *		and the top 16 bits are unused.
+ *
+ *		On failure, the returned value is one of the following:
+ *
+ *		**-EINVAL** if the packet or input arguments are invalid.
+ *
+ *		**-EOPNOTSUPP** if the kernel configuration does not enable SYN
+ *		cookies (CONFIG_SYN_COOKIES is off).
+ *
+ *		**-EPROTONOSUPPORT** if the IP version is not 4 or 6 (or 6, but
+ *		CONFIG_IPV6 is disabled).
+ *
+ * int bpf_tcp_raw_check_syncookie(void *iph, u32 iph_len, struct tcphdr *th, u32 th_len)
+ *	Description
+ *		Check whether *iph* and *th* contain a valid SYN cookie ACK
+ *		without depending on a listening socket.
+ *
+ *		*iph* points to the start of the IPv4 or IPv6 header, while
+ *		*iph_len* contains **sizeof**\ (**struct iphdr**) or
+ *		**sizeof**\ (**struct ip6hdr**).
+ *
+ *		*th* points to the start of the TCP header, while *th_len*
+ *		contains the length of the TCP header (at least
+ *		**sizeof**\ (**struct tcphdr**)).
+ *	Return
+ *		0 if *iph* and *th* are a valid SYN cookie ACK.
+ *
+ *		On failure, the returned value is one of the following:
+ *
+ *		**-EACCES** if the SYN cookie is not valid.
+ *
+ *		**-EINVAL** if the packet or input arguments are invalid.
+ *
+ *		**-EOPNOTSUPP** if the kernel configuration does not enable SYN
+ *		cookies (CONFIG_SYN_COOKIES is off).
+ *
+ *		**-EPROTONOSUPPORT** if the IP version is not 4 or 6 (or 6, but
+ *		CONFIG_IPV6 is disabled).
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5283,6 +5338,8 @@ union bpf_attr {
 	FN(xdp_get_buff_len),		\
 	FN(xdp_load_bytes),		\
 	FN(xdp_store_bytes),		\
+	FN(tcp_raw_gen_syncookie),	\
+	FN(tcp_raw_check_syncookie),	\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
Maxim Mikityanskiy wrote:
The new helpers bpf_tcp_raw_{gen,check}_syncookie allow an XDP program to generate SYN cookies in response to TCP SYN packets and to check those cookies upon receiving the first ACK packet (the final packet of the TCP handshake).
Unlike bpf_tcp_{gen,check}_syncookie these new helpers don't need a listening socket on the local machine, which allows to use them together with synproxy to accelerate SYN cookie generation.
Signed-off-by: Maxim Mikityanskiy maximmi@nvidia.com Reviewed-by: Tariq Toukan tariqt@nvidia.com
[...]
+BPF_CALL_4(bpf_tcp_raw_check_syncookie, void *, iph, u32, iph_len,
struct tcphdr *, th, u32, th_len)
+{ +#ifdef CONFIG_SYN_COOKIES
- u32 cookie;
- int ret;
- if (unlikely(th_len < sizeof(*th)))
return -EINVAL;
- if (!th->ack || th->rst || th->syn)
return -EINVAL;
- if (unlikely(iph_len < sizeof(struct iphdr)))
return -EINVAL;
- cookie = ntohl(th->ack_seq) - 1;
- /* Both struct iphdr and struct ipv6hdr have the version field at the
* same offset so we can cast to the shorter header (struct iphdr).
*/
- switch (((struct iphdr *)iph)->version) {
- case 4:
Did you consider just exposing __cookie_v4_check() and __cookie_v6_check()? My code at least has already run the code above before it would ever call this helper, so all the other bits are duplicate. The only reason to build it this way, as I see it, is either so code can call it blindly without doing a v4/v6 switch, or to make it look and feel like the 'tc' world, but it's already dropped the sk so it's a bit different already, and ifdef TC/XDP could handle the different parts.
ret = __cookie_v4_check((struct iphdr *)iph, th, cookie);
break;
+#if IS_BUILTIN(CONFIG_IPV6)
- case 6:
if (unlikely(iph_len < sizeof(struct ipv6hdr)))
return -EINVAL;
ret = __cookie_v6_check((struct ipv6hdr *)iph, th, cookie);
break;
+#endif /* CONFIG_IPV6 */
- default:
return -EPROTONOSUPPORT;
- }
- if (ret > 0)
return 0;
- return -EACCES;
+#else
- return -EOPNOTSUPP;
+#endif +}
On 2022-01-25 09:54, John Fastabend wrote:
Maxim Mikityanskiy wrote:
The new helpers bpf_tcp_raw_{gen,check}_syncookie allow an XDP program to generate SYN cookies in response to TCP SYN packets and to check those cookies upon receiving the first ACK packet (the final packet of the TCP handshake).
Unlike bpf_tcp_{gen,check}_syncookie these new helpers don't need a listening socket on the local machine, which allows to use them together with synproxy to accelerate SYN cookie generation.
Signed-off-by: Maxim Mikityanskiy maximmi@nvidia.com Reviewed-by: Tariq Toukan tariqt@nvidia.com
[...]
+BPF_CALL_4(bpf_tcp_raw_check_syncookie, void *, iph, u32, iph_len,
struct tcphdr *, th, u32, th_len)
+{ +#ifdef CONFIG_SYN_COOKIES
- u32 cookie;
- int ret;
- if (unlikely(th_len < sizeof(*th)))
return -EINVAL;
- if (!th->ack || th->rst || th->syn)
return -EINVAL;
- if (unlikely(iph_len < sizeof(struct iphdr)))
return -EINVAL;
- cookie = ntohl(th->ack_seq) - 1;
- /* Both struct iphdr and struct ipv6hdr have the version field at the
* same offset so we can cast to the shorter header (struct iphdr).
*/
- switch (((struct iphdr *)iph)->version) {
- case 4:
Did you consider just exposing __cookie_v4_check() and __cookie_v6_check()?
No, I didn't, I just implemented it consistently with bpf_tcp_check_syncookie, but let's consider it.
I can't just pass a pointer from BPF without passing the size, so I would need some wrappers around __cookie_v{4,6}_check anyway. The checks for th_len and iph_len would have to stay in the helpers. The check for TCP flags (ACK, !RST, !SYN) could be either in the helper or in BPF. The switch would obviously be gone.
The bottom line is that it would be the same code, but without the switch, and repeated twice. What benefit do you see in this approach? From my side, I only see the ability to drop one branch at the expense of duplicating the code above the switch (th_len and iph_len checks).
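For the record, a hypothetical sketch of what one of the split helpers would look like (not something this series proposes; the name follows the _v4/_v6 scheme discussed later in the thread). It keeps the length and flag checks and only loses the version switch:

BPF_CALL_4(bpf_tcp_raw_check_syncookie_v4, struct iphdr *, iph, u32, iph_len,
	   struct tcphdr *, th, u32, th_len)
{
#ifdef CONFIG_SYN_COOKIES
	if (unlikely(iph_len < sizeof(*iph) || th_len < sizeof(*th)))
		return -EINVAL;

	if (!th->ack || th->rst || th->syn)
		return -EINVAL;

	if (__cookie_v4_check(iph, th, ntohl(th->ack_seq) - 1) > 0)
		return 0;

	return -EACCES;
#else
	return -EOPNOTSUPP;
#endif
}

The _v6 variant would repeat everything except the __cookie_v6_check() call, which is the duplication I mean.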
My code at least has already run the code above before it would ever call this helper so all the other bits are duplicate.
Sorry, I didn't quite understand this part. What "your code" are you referring to?
The only reason to build it this way, as I see it, is either code can call it blindly without doing 4/v6 switch. or to make it look and feel like 'tc' world, but its already dropped the ok so its a bit different already and ifdef TC/XDP could hanlde the different parts.
ret = __cookie_v4_check((struct iphdr *)iph, th, cookie);
break;
+#if IS_BUILTIN(CONFIG_IPV6)
- case 6:
if (unlikely(iph_len < sizeof(struct ipv6hdr)))
return -EINVAL;
ret = __cookie_v6_check((struct ipv6hdr *)iph, th, cookie);
break;
+#endif /* CONFIG_IPV6 */
- default:
return -EPROTONOSUPPORT;
- }
- if (ret > 0)
return 0;
- return -EACCES;
+#else
- return -EOPNOTSUPP;
+#endif +}
Maxim Mikityanskiy wrote:
On 2022-01-25 09:54, John Fastabend wrote:
Maxim Mikityanskiy wrote:
The new helpers bpf_tcp_raw_{gen,check}_syncookie allow an XDP program to generate SYN cookies in response to TCP SYN packets and to check those cookies upon receiving the first ACK packet (the final packet of the TCP handshake).
Unlike bpf_tcp_{gen,check}_syncookie these new helpers don't need a listening socket on the local machine, which allows to use them together with synproxy to accelerate SYN cookie generation.
Signed-off-by: Maxim Mikityanskiy maximmi@nvidia.com Reviewed-by: Tariq Toukan tariqt@nvidia.com
[...]
+BPF_CALL_4(bpf_tcp_raw_check_syncookie, void *, iph, u32, iph_len,
struct tcphdr *, th, u32, th_len)
+{ +#ifdef CONFIG_SYN_COOKIES
- u32 cookie;
- int ret;
- if (unlikely(th_len < sizeof(*th)))
return -EINVAL;
- if (!th->ack || th->rst || th->syn)
return -EINVAL;
- if (unlikely(iph_len < sizeof(struct iphdr)))
return -EINVAL;
- cookie = ntohl(th->ack_seq) - 1;
- /* Both struct iphdr and struct ipv6hdr have the version field at the
* same offset so we can cast to the shorter header (struct iphdr).
*/
- switch (((struct iphdr *)iph)->version) {
- case 4:
Did you consider just exposing __cookie_v4_check() and __cookie_v6_check()?
No, I didn't, I just implemented it consistently with bpf_tcp_check_syncookie, but let's consider it.
I can't just pass a pointer from BPF without passing the size, so I would need some wrappers around __cookie_v{4,6}_check anyway. The checks for th_len and iph_len would have to stay in the helpers. The check for TCP flags (ACK, !RST, !SYN) could be either in the helper or in BPF. The switch would obviously be gone.
I'm not sure you would need the len checks in helper, they provide some guarantees I guess, but the void * is just memory I don't see any checks on its size. It could be the last byte of a value for example?
The bottom line is that it would be the same code, but without the switch, and repeated twice. What benefit do you see in this approach?
The only benefit would be to shave some instructions off the program. XDP is about performance so I figure we shouldn't be adding arbitrary stuff here. OTOH you're already jumping into a helper so it might not matter at all.
From my side, I only see the ability to drop one branch at the expense of duplicating the code above the switch (th_len and iph_len checks).
Just not sure you need the checks either, can you just assume the user gives good data?
My code at least has already run the code above before it would ever call this helper so all the other bits are duplicate.
Sorry, I didn't quite understand this part. What "your code" are you referring to?
Just that the XDP code I maintain has an if ipv4 {...} else ipv6 {...} structure in it, so it could use a v4_check... and v6_check... and call the correct version directly, removing the switch from the helper.
Do you think there could be a performance reason to drop those instructions, or is it just hidden by the hash itself? Also it seems a bit annoying if the user is calling multiple helpers and they keep doing the same checks over and over.
John Fastabend wrote:
Maxim Mikityanskiy wrote:
On 2022-01-25 09:54, John Fastabend wrote:
Maxim Mikityanskiy wrote:
The new helpers bpf_tcp_raw_{gen,check}_syncookie allow an XDP program to generate SYN cookies in response to TCP SYN packets and to check those cookies upon receiving the first ACK packet (the final packet of the TCP handshake).
Unlike bpf_tcp_{gen,check}_syncookie these new helpers don't need a listening socket on the local machine, which allows to use them together with synproxy to accelerate SYN cookie generation.
Signed-off-by: Maxim Mikityanskiy maximmi@nvidia.com Reviewed-by: Tariq Toukan tariqt@nvidia.com
[...]
+BPF_CALL_4(bpf_tcp_raw_check_syncookie, void *, iph, u32, iph_len,
struct tcphdr *, th, u32, th_len)
+{ +#ifdef CONFIG_SYN_COOKIES
- u32 cookie;
- int ret;
- if (unlikely(th_len < sizeof(*th)))
return -EINVAL;
- if (!th->ack || th->rst || th->syn)
return -EINVAL;
- if (unlikely(iph_len < sizeof(struct iphdr)))
return -EINVAL;
- cookie = ntohl(th->ack_seq) - 1;
- /* Both struct iphdr and struct ipv6hdr have the version field at the
* same offset so we can cast to the shorter header (struct iphdr).
*/
- switch (((struct iphdr *)iph)->version) {
- case 4:
Did you consider just exposing __cookie_v4_check() and __cookie_v6_check()?
No, I didn't, I just implemented it consistently with bpf_tcp_check_syncookie, but let's consider it.
I can't just pass a pointer from BPF without passing the size, so I would need some wrappers around __cookie_v{4,6}_check anyway. The checks for th_len and iph_len would have to stay in the helpers. The check for TCP flags (ACK, !RST, !SYN) could be either in the helper or in BPF. The switch would obviously be gone.
I'm not sure you would need the len checks in helper, they provide some guarantees I guess, but the void * is just memory I don't see any checks on its size. It could be the last byte of a value for example?
I suspect we need to add verifier checks here anyways to ensure we don't walk off the end of a value unless something else is ensuring the iph is inside a valid memory block.
The bottom line is that it would be the same code, but without the switch, and repeated twice. What benefit do you see in this approach?
The only benefit would be to shave some instructions off the program. XDP is about performance so I figure we shouldn't be adding arbitrary stuff here. OTOH you're already jumping into a helper so it might not matter at all.
From my side, I only see the ability to drop one branch at the expense of duplicating the code above the switch (th_len and iph_len checks).
Just not sure you need the checks either, can you just assume the user gives good data?
My code at least has already run the code above before it would ever call this helper so all the other bits are duplicate.
Sorry, I didn't quite understand this part. What "your code" are you referring to?
Just that the XDP code I maintain has a if ipv4 {...} else ipv6{...} structure in it so could use a v4_check... and v6_check... then call the correct version directly, removing the switch from the helper.
Do you think there could be a performance reason to drop out those instructions or is it just hid by the hash itself. Also it seems a bit annoying if user is calling multiple helpers and they keep doing the same checks over and over.
On 2022-01-31 23:19, John Fastabend wrote:
John Fastabend wrote:
Maxim Mikityanskiy wrote:
On 2022-01-25 09:54, John Fastabend wrote:
Maxim Mikityanskiy wrote:
The new helpers bpf_tcp_raw_{gen,check}_syncookie allow an XDP program to generate SYN cookies in response to TCP SYN packets and to check those cookies upon receiving the first ACK packet (the final packet of the TCP handshake).
Unlike bpf_tcp_{gen,check}_syncookie these new helpers don't need a listening socket on the local machine, which allows to use them together with synproxy to accelerate SYN cookie generation.
Signed-off-by: Maxim Mikityanskiy maximmi@nvidia.com Reviewed-by: Tariq Toukan tariqt@nvidia.com
[...]
+BPF_CALL_4(bpf_tcp_raw_check_syncookie, void *, iph, u32, iph_len,
struct tcphdr *, th, u32, th_len)
+{ +#ifdef CONFIG_SYN_COOKIES
- u32 cookie;
- int ret;
- if (unlikely(th_len < sizeof(*th)))
return -EINVAL;
- if (!th->ack || th->rst || th->syn)
return -EINVAL;
- if (unlikely(iph_len < sizeof(struct iphdr)))
return -EINVAL;
- cookie = ntohl(th->ack_seq) - 1;
- /* Both struct iphdr and struct ipv6hdr have the version field at the
* same offset so we can cast to the shorter header (struct iphdr).
*/
- switch (((struct iphdr *)iph)->version) {
- case 4:
Did you consider just exposing __cookie_v4_check() and __cookie_v6_check()?
No, I didn't, I just implemented it consistently with bpf_tcp_check_syncookie, but let's consider it.
I can't just pass a pointer from BPF without passing the size, so I would need some wrappers around __cookie_v{4,6}_check anyway. The checks for th_len and iph_len would have to stay in the helpers. The check for TCP flags (ACK, !RST, !SYN) could be either in the helper or in BPF. The switch would obviously be gone.
I'm not sure you would need the len checks in helper, they provide some guarantees I guess, but the void * is just memory I don't see any checks on its size. It could be the last byte of a value for example?
The verifier makes sure that the packet pointer and the size come together in function parameters (see check_arg_pair_ok). It also makes sure that the memory region defined by these two parameters is valid, i.e. in our case it belongs to packet data.
Now that the helper got a valid memory region, its length is still arbitrary. The helper has to check it's big enough to contain a TCP header, before trying to access its fields. Hence the checks in the helper.
I suspect we need to add verifier checks here anyways to ensure we don't walk off the end of a value unless something else is ensuring the iph is inside a valid memory block.
The verifier ensures that the [iph; iph+iph_len) is valid memory, but the helper still has to check that struct iphdr fits into this region. Otherwise iph_len could be too small, and the helper would access memory outside of the valid region.
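Concretely, a hypothetical (and deliberately bad) caller like the fragment below passes the verifier as long as 4 bytes at th are inside the packet, so without the th_len < sizeof(*th) check in the helper, reading th->ack_seq would touch memory past the region the verifier actually validated:

	if ((void *)th + 4 > data_end)
		return XDP_DROP;

	/* Only 4 bytes of the TCP header are proven valid here. */
	int err = bpf_tcp_raw_check_syncookie(iph, sizeof(struct iphdr), th, 4);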
The bottom line is that it would be the same code, but without the switch, and repeated twice. What benefit do you see in this approach?
The only benefit would be to shave some instructions off the program. XDP is about performance so I figure we shouldn't be adding arbitrary stuff here. OTOH you're already jumping into a helper so it might not matter at all.
From my side, I only see the ability to drop one branch at the expense of duplicating the code above the switch (th_len and iph_len checks).
Just not sure you need the checks either, can you just assume the user gives good data?
No, since the BPF program would be able to trick the kernel into reading from an invalid location (see the explanation above).
My code at least has already run the code above before it would ever call this helper so all the other bits are duplicate.
Sorry, I didn't quite understand this part. What "your code" are you referring to?
Just that the XDP code I maintain has a if ipv4 {...} else ipv6{...} structure
Same for my code (see the last patch in the series).
Splitting into two helpers would allow dropping the extra switch in the helper; however:
1. The code will be duplicated for the checks.
2. It won't be consistent with bpf_tcp_check_syncookie (and all other existing helpers - as far as I see, there is no split for IPv4/IPv6).
3. It's easier to misuse, e.g., pass an IPv6 header to the IPv4 helper. This point is controversial, since it shouldn't pose any additional security threat, but in my opinion, it's better to be foolproof. That means, I'd add the IP version check even to the separate helpers, which defeats the purpose of separating them.
Given these points, I'd prefer to keep it a single helper. However, if you have strong objections, I can split it.
in it so could use a v4_check... and v6_check... then call the correct version directly, removing the switch from the helper.
Do you think there could be a performance reason to drop out those instructions or is it just hid by the hash itself. Also it seems a bit annoying if user is calling multiple helpers and they keep doing the same checks over and over.
Maxim Mikityanskiy wrote:
On 2022-01-31 23:19, John Fastabend wrote:
John Fastabend wrote:
Maxim Mikityanskiy wrote:
On 2022-01-25 09:54, John Fastabend wrote:
Maxim Mikityanskiy wrote:
The new helpers bpf_tcp_raw_{gen,check}_syncookie allow an XDP program to generate SYN cookies in response to TCP SYN packets and to check those cookies upon receiving the first ACK packet (the final packet of the TCP handshake).
Unlike bpf_tcp_{gen,check}_syncookie these new helpers don't need a listening socket on the local machine, which allows to use them together with synproxy to accelerate SYN cookie generation.
Signed-off-by: Maxim Mikityanskiy maximmi@nvidia.com Reviewed-by: Tariq Toukan tariqt@nvidia.com
[...]
+BPF_CALL_4(bpf_tcp_raw_check_syncookie, void *, iph, u32, iph_len,
struct tcphdr *, th, u32, th_len)
+{ +#ifdef CONFIG_SYN_COOKIES
- u32 cookie;
- int ret;
- if (unlikely(th_len < sizeof(*th)))
return -EINVAL;
- if (!th->ack || th->rst || th->syn)
return -EINVAL;
- if (unlikely(iph_len < sizeof(struct iphdr)))
return -EINVAL;
- cookie = ntohl(th->ack_seq) - 1;
- /* Both struct iphdr and struct ipv6hdr have the version field at the
* same offset so we can cast to the shorter header (struct iphdr).
*/
- switch (((struct iphdr *)iph)->version) {
- case 4:
Did you consider just exposing __cookie_v4_check() and __cookie_v6_check()?
No, I didn't, I just implemented it consistently with bpf_tcp_check_syncookie, but let's consider it.
I can't just pass a pointer from BPF without passing the size, so I would need some wrappers around __cookie_v{4,6}_check anyway. The checks for th_len and iph_len would have to stay in the helpers. The check for TCP flags (ACK, !RST, !SYN) could be either in the helper or in BPF. The switch would obviously be gone.
I'm not sure you would need the len checks in helper, they provide some guarantees I guess, but the void * is just memory I don't see any checks on its size. It could be the last byte of a value for example?
The verifier makes sure that the packet pointer and the size come together in function parameters (see check_arg_pair_ok). It also makes sure that the memory region defined by these two parameters is valid, i.e. in our case it belongs to packet data.
Now that the helper got a valid memory region, its length is still arbitrary. The helper has to check it's big enough to contain a TCP header, before trying to access its fields. Hence the checks in the helper.
I suspect we need to add verifier checks here anyways to ensure we don't walk off the end of a value unless something else is ensuring the iph is inside a valid memory block.
The verifier ensures that the [iph; iph+iph_len) is valid memory, but the helper still has to check that struct iphdr fits into this region. Otherwise iph_len could be too small, and the helper would access memory outside of the valid region.
Thanks for the details this all makes sense. See response to other mail about adding new types. Replied to the wrong email but I think the context is not lost.
The bottom line is that it would be the same code, but without the switch, and repeated twice. What benefit do you see in this approach?
The only benefit would be to shave some instructions off the program. XDP is about performance so I figure we shouldn't be adding arbitrary stuff here. OTOH you're already jumping into a helper so it might not matter at all.
From my side, I only see the ability to drop one branch at the expense of duplicating the code above the switch (th_len and iph_len checks).
Just not sure you need the checks either, can you just assume the user gives good data?
No, since the BPF program would be able to trick the kernel into reading from an invalid location (see the explanation above).
My code at least has already run the code above before it would ever call this helper so all the other bits are duplicate.
Sorry, I didn't quite understand this part. What "your code" are you referring to?
Just that the XDP code I maintain has a if ipv4 {...} else ipv6{...} structure
Same for my code (see the last patch in the series).
Splitting into two helpers would allow to drop the extra switch in the helper, however:
- The code will be duplicated for the checks.
See response wrt PTR_TO_IP, PTR_TO_TCP types.
- It won't be consistent with bpf_tcp_check_syncookie (and all other
existing helpers - as far as I see, there is no split for IPv4/IPv6).
This does seem useful to me.
- It's easier to misuse, e.g., pass an IPv6 header to the IPv4 helper.
This point is controversial, since it shouldn't pose any additional security threat, but in my opinion, it's better to be foolproof. That means, I'd add the IP version check even to the separate helpers, which defeats the purpose of separating them.
Not really convinced that we need to guard against misuse. This is down in XDP space; it's not a place where we should be adding extra insns to stop developers from hurting themselves, just as a general rule.
Given these points, I'd prefer to keep it a single helper. However, if you have strong objections, I can split it.
I think (2) is the strongest argument, combined with the fact that the call is a heavy operation, so saving some cycles maybe isn't terribly important, but it's XDP land again and insns matter IMO. I'm on the fence, maybe others have opinions?
Sorry for diverging the thread a bit there with the two replies.
Thanks, John
John Fastabend <john.fastabend@gmail.com> writes:
Maxim Mikityanskiy wrote:
On 2022-01-31 23:19, John Fastabend wrote:
John Fastabend wrote:
Maxim Mikityanskiy wrote:
On 2022-01-25 09:54, John Fastabend wrote:
Maxim Mikityanskiy wrote: > The new helpers bpf_tcp_raw_{gen,check}_syncookie allow an XDP program > to generate SYN cookies in response to TCP SYN packets and to check > those cookies upon receiving the first ACK packet (the final packet of > the TCP handshake). > > Unlike bpf_tcp_{gen,check}_syncookie these new helpers don't need a > listening socket on the local machine, which allows to use them together > with synproxy to accelerate SYN cookie generation. > > Signed-off-by: Maxim Mikityanskiy maximmi@nvidia.com > Reviewed-by: Tariq Toukan tariqt@nvidia.com > ---
[...]
> + > +BPF_CALL_4(bpf_tcp_raw_check_syncookie, void *, iph, u32, iph_len, > + struct tcphdr *, th, u32, th_len) > +{ > +#ifdef CONFIG_SYN_COOKIES > + u32 cookie; > + int ret; > + > + if (unlikely(th_len < sizeof(*th))) > + return -EINVAL; > + > + if (!th->ack || th->rst || th->syn) > + return -EINVAL; > + > + if (unlikely(iph_len < sizeof(struct iphdr))) > + return -EINVAL; > + > + cookie = ntohl(th->ack_seq) - 1; > + > + /* Both struct iphdr and struct ipv6hdr have the version field at the > + * same offset so we can cast to the shorter header (struct iphdr). > + */ > + switch (((struct iphdr *)iph)->version) { > + case 4:
Did you consider just exposing __cookie_v4_check() and __cookie_v6_check()?
No, I didn't, I just implemented it consistently with bpf_tcp_check_syncookie, but let's consider it.
I can't just pass a pointer from BPF without passing the size, so I would need some wrappers around __cookie_v{4,6}_check anyway. The checks for th_len and iph_len would have to stay in the helpers. The check for TCP flags (ACK, !RST, !SYN) could be either in the helper or in BPF. The switch would obviously be gone.
I'm not sure you would need the len checks in helper, they provide some guarantees I guess, but the void * is just memory I don't see any checks on its size. It could be the last byte of a value for example?
The verifier makes sure that the packet pointer and the size come together in function parameters (see check_arg_pair_ok). It also makes sure that the memory region defined by these two parameters is valid, i.e. in our case it belongs to packet data.
Now that the helper got a valid memory region, its length is still arbitrary. The helper has to check it's big enough to contain a TCP header, before trying to access its fields. Hence the checks in the helper.
I suspect we need to add verifier checks here anyways to ensure we don't walk off the end of a value unless something else is ensuring the iph is inside a valid memory block.
The verifier ensures that the [iph; iph+iph_len) is valid memory, but the helper still has to check that struct iphdr fits into this region. Otherwise iph_len could be too small, and the helper would access memory outside of the valid region.
Thanks for the details this all makes sense. See response to other mail about adding new types. Replied to the wrong email but I think the context is not lost.
Keeping my reply here in an attempt to de-fork :)
The bottom line is that it would be the same code, but without the switch, and repeated twice. What benefit do you see in this approach?
The only benefit would be to shave some instructions off the program. XDP is about performance so I figure we shouldn't be adding arbitrary stuff here. OTOH you're already jumping into a helper so it might not matter at all.
From my side, I only see the ability to drop one branch at the expense of duplicating the code above the switch (th_len and iph_len checks).
Just not sure you need the checks either, can you just assume the user gives good data?
No, since the BPF program would be able to trick the kernel into reading from an invalid location (see the explanation above).
My code at least has already run the code above before it would ever call this helper so all the other bits are duplicate.
Sorry, I didn't quite understand this part. What "your code" are you referring to?
Just that the XDP code I maintain has a if ipv4 {...} else ipv6{...} structure
Same for my code (see the last patch in the series).
Splitting into two helpers would allow to drop the extra switch in the helper, however:
- The code will be duplicated for the checks.
See response wrt PTR_TO_IP, PTR_TO_TCP types.
So about that (quoting some context from your other email):
We could have some new mem types, PTR_TO_IPV4, PTR_TO_IPv6, and PTR_TO_TCP. Then we simplify the helper signatures to just,
bpf_tcp_raw_check_syncookie_v4(iph, tcph); bpf_tcp_raw_check_syncookie_v6(iph, tcph);
And the verifier "knows" what a v4/v6 header is and does the mem check at verification time instead of run time.
I think this could probably be achieved with PTR_TO_BTF arguments to the helper (if we define appropriate struct types that the program can use for each header type)?
It doesn't really guard against pointing into the wrong bit of the packet (or somewhere else entirely), so the header can still contain garbage, but at least the len check should be handled automatically with PTR_TO_BTF, and we avoid the need to define a whole bunch of new PTR_TO_* types...
- It won't be consistent with bpf_tcp_check_syncookie (and all other
existing helpers - as far as I see, there is no split for IPv4/IPv6).
This does seem useful to me.
If it's consistency we're after we could split the others as well? I guess the main drawback here is code bloat (can't inline the functions as they need to be available for BPF_CALL, so we either get duplicates or an additional function call overhead for the old helper if it just calls the new ones).
- It's easier to misuse, e.g., pass an IPv6 header to the IPv4 helper.
This point is controversial, since it shouldn't pose any additional security threat, but in my opinion, it's better to be foolproof. That means, I'd add the IP version check even to the separate helpers, which defeats the purpose of separating them.
Not really convinced that we need to guard against misuse. This is down in XDP space; it's not a place where we should be adding extra insns to stop developers from hurting themselves, just as a general rule.
Yeah, I think in general for XDP, if you pass garbage data to the helpers, you get to keep the pieces when it breaks. We need to make sure the *kernel* doesn't misbehave (i.e., no crashing, and no invalid state being created inside the kernel), but it's up to the XDP program author to use the API correctly...
Given these points, I'd prefer to keep it a single helper. However, if you have strong objections, I can split it.
I think (2) is the strongest argument, combined with the point that the call is a heavy operation and saving some cycles maybe isn't terribly important; but it's XDP land again and insns matter IMO. I'm on the fence, maybe others have opinions?
It's not a very strong opinion, but I think in general, optimising for performance in XDP is the right thing to do. That's kinda what it's for :)
-Toke
On 2022-02-04 16:08, Toke Høiland-Jørgensen wrote:
John Fastabend john.fastabend@gmail.com writes:
Maxim Mikityanskiy wrote:
On 2022-01-31 23:19, John Fastabend wrote:
John Fastabend wrote:
Maxim Mikityanskiy wrote:
On 2022-01-25 09:54, John Fastabend wrote: > Maxim Mikityanskiy wrote: >> The new helpers bpf_tcp_raw_{gen,check}_syncookie allow an XDP program >> to generate SYN cookies in response to TCP SYN packets and to check >> those cookies upon receiving the first ACK packet (the final packet of >> the TCP handshake). >> >> Unlike bpf_tcp_{gen,check}_syncookie these new helpers don't need a >> listening socket on the local machine, which allows to use them together >> with synproxy to accelerate SYN cookie generation. >> >> Signed-off-by: Maxim Mikityanskiy maximmi@nvidia.com >> Reviewed-by: Tariq Toukan tariqt@nvidia.com >> --- > > [...] > >> + >> +BPF_CALL_4(bpf_tcp_raw_check_syncookie, void *, iph, u32, iph_len, >> + struct tcphdr *, th, u32, th_len) >> +{ >> +#ifdef CONFIG_SYN_COOKIES >> + u32 cookie; >> + int ret; >> + >> + if (unlikely(th_len < sizeof(*th))) >> + return -EINVAL; >> + >> + if (!th->ack || th->rst || th->syn) >> + return -EINVAL; >> + >> + if (unlikely(iph_len < sizeof(struct iphdr))) >> + return -EINVAL; >> + >> + cookie = ntohl(th->ack_seq) - 1; >> + >> + /* Both struct iphdr and struct ipv6hdr have the version field at the >> + * same offset so we can cast to the shorter header (struct iphdr). >> + */ >> + switch (((struct iphdr *)iph)->version) { >> + case 4: > > Did you consider just exposing __cookie_v4_check() and __cookie_v6_check()?
No, I didn't, I just implemented it consistently with bpf_tcp_check_syncookie, but let's consider it.
I can't just pass a pointer from BPF without passing the size, so I would need some wrappers around __cookie_v{4,6}_check anyway. The checks for th_len and iph_len would have to stay in the helpers. The check for TCP flags (ACK, !RST, !SYN) could be either in the helper or in BPF. The switch would obviously be gone.
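For illustration only, if the flag check moved to the BPF side, it would be a couple of lines in the XDP program before calling the helper (a sketch that assumes tcp_header is a struct tcphdr pointer already bounds-checked against data_end):

	/* Only ACK packets without RST/SYN are candidates for cookie
	 * validation; drop everything else before calling the helper.
	 */
	if (!tcp_header->ack || tcp_header->rst || tcp_header->syn)
		return XDP_DROP;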
I'm not sure you would need the len checks in helper, they provide some guarantees I guess, but the void * is just memory I don't see any checks on its size. It could be the last byte of a value for example?
The verifier makes sure that the packet pointer and the size come together in function parameters (see check_arg_pair_ok). It also makes sure that the memory region defined by these two parameters is valid, i.e. in our case it belongs to packet data.
Now that the helper got a valid memory region, its length is still arbitrary. The helper has to check it's big enough to contain a TCP header, before trying to access its fields. Hence the checks in the helper.
I suspect we need to add verifier checks here anyways to ensure we don't walk off the end of a value unless something else is ensuring the iph is inside a valid memory block.
The verifier ensures that the [iph; iph+iph_len) is valid memory, but the helper still has to check that struct iphdr fits into this region. Otherwise iph_len could be too small, and the helper would access memory outside of the valid region.
Thanks for the details this all makes sense. See response to other mail about adding new types. Replied to the wrong email but I think the context is not lost.
Keeping my reply here in an attempt to de-fork :)
The bottom line is that it would be the same code, but without the switch, and repeated twice. What benefit do you see in this approach?
The only benefit would be to shave some instructions off the program. XDP is about performance so I figure we shouldn't be adding arbitrary stuff here. OTOH you're already jumping into a helper so it might not matter at all.
From my side, I only see the ability to drop one branch at the expense of duplicating the code above the switch (th_len and iph_len checks).
Just not sure you need the checks either, can you just assume the user gives good data?
No, since the BPF program would be able to trick the kernel into reading from an invalid location (see the explanation above).
> My code at least has already run the code above before it would ever call > this helper so all the other bits are duplicate.
Sorry, I didn't quite understand this part. What "your code" are you referring to?
Just that the XDP code I maintain has a if ipv4 {...} else ipv6{...} structure
Same for my code (see the last patch in the series).
Splitting into two helpers would allow to drop the extra switch in the helper, however:
- The code will be duplicated for the checks.
See response wrt PTR_TO_IP, PTR_TO_TCP types.
So about that (quoting some context from your other email):
We could have some new mem types, PTR_TO_IPV4, PTR_TO_IPv6, and PTR_TO_TCP. Then we simplify the helper signatures to just,
bpf_tcp_raw_check_syncookie_v4(iph, tcph); bpf_tcp_raw_check_syncookie_v6(iph, tcph);
And the verifier "knows" what a v4/v6 header is and does the mem check at verification time instead of run time.
I think this could probably be achieved with PTR_TO_BTF arguments to the helper (if we define appropriate struct types that the program can use for each header type)?
I get the following error when I try to pass the headers from packet data to a helper that accepts ARG_PTR_TO_BTF_ID:
; value = bpf_tcp_raw_gen_syncookie_ipv4(hdr->ipv4, hdr->tcp,
297: (79) r1 = *(u64 *)(r10 -80) ; R1_w=pkt(id=0,off=14,r=74,imm=0) R10=fp0
298: (79) r2 = *(u64 *)(r10 -72) ; R2_w=pkt(id=5,off=14,r=74,umax_value=60,var_off=(0x0; 0x3c)) R10=fp0
299: (bc) w3 = w9 ; R3_w=invP(id=0,umin_value=20,umax_value=60,var_off=(0x0; 0x3c)) R9=invP(id=0,umin_value=20,umax_value=60,var_off=(0x0; 0x3c))
300: (85) call bpf_tcp_raw_gen_syncookie_ipv4#192
R1 type=pkt expected=ptr_
processed 317 insns (limit 1000000) max_states_per_insn 0 total_states 23 peak_states 23 mark_read 12
-- END PROG LOAD LOG --
It looks like the verifier doesn't currently support such type conversion. Could you give any clue what is needed to add this support? Is it enough to extend compatible_reg_types, or should more checks be added anywhere?
Alternatively, I can revert to ARG_PTR_TO_MEM and do size checks in runtime in the helper.
It doesn't really guard against pointing into the wrong bit of the packet (or somewhere else entirely), so the header can still contain garbage, but at least the len check should be handled automatically with PTR_TO_BTF, and we avoid the need to define a whole bunch of new PTR_TO_* types...
- It won't be consistent with bpf_tcp_check_syncookie (and all other
existing helpers - as far as I see, there is no split for IPv4/IPv6).
This does seem useful to me.
If it's consistency we're after we could split the others as well? I guess the main drawback here is code bloat (can't inline the functions as they need to be available for BPF_CALL, so we either get duplicates or an additional function call overhead for the old helper if it just calls the new ones).
- It's easier to misuse, e.g., pass an IPv6 header to the IPv4 helper.
This point is controversial, since it shouldn't pose any additional security threat, but in my opinion, it's better to be foolproof. That means, I'd add the IP version check even to the separate helpers, which defeats the purpose of separating them.
Not really convinced that we need to guard against misuse. This is down in XDP space; it's not a place where we should be adding extra insns to stop developers from hurting themselves, just as a general rule.
Yeah, I think in general for XDP, if you pass garbage data to the helpers, you get to keep the pieces when it breaks. We need to make sure the *kernel* doesn't misbehave (i.e., no crashing, and no invalid state being created inside the kernel), but it's up to the XDP program author to use the API correctly...
Given these points, I'd prefer to keep it a single helper. However, if you have strong objections, I can split it.
I think (2) is the strongest argument, combined with the point that the call is a heavy operation and saving some cycles maybe isn't terribly important; but it's XDP land again and insns matter IMO. I'm on the fence, maybe others have opinions?
It's not a very strong opinion, but I think in general, optimising for performance in XDP is the right thing to do. That's kinda what it's for :)
-Toke
On Mon, Feb 21, 2022 at 07:56:28PM IST, Maxim Mikityanskiy wrote:
On 2022-02-04 16:08, Toke Høiland-Jørgensen wrote:
John Fastabend john.fastabend@gmail.com writes:
Maxim Mikityanskiy wrote:
On 2022-01-31 23:19, John Fastabend wrote:
John Fastabend wrote:
Maxim Mikityanskiy wrote: > On 2022-01-25 09:54, John Fastabend wrote: > > Maxim Mikityanskiy wrote: > > > The new helpers bpf_tcp_raw_{gen,check}_syncookie allow an XDP program > > > to generate SYN cookies in response to TCP SYN packets and to check > > > those cookies upon receiving the first ACK packet (the final packet of > > > the TCP handshake). > > > > > > Unlike bpf_tcp_{gen,check}_syncookie these new helpers don't need a > > > listening socket on the local machine, which allows to use them together > > > with synproxy to accelerate SYN cookie generation. > > > > > > Signed-off-by: Maxim Mikityanskiy maximmi@nvidia.com > > > Reviewed-by: Tariq Toukan tariqt@nvidia.com > > > --- > > > > [...] > > > > > + > > > +BPF_CALL_4(bpf_tcp_raw_check_syncookie, void *, iph, u32, iph_len, > > > + struct tcphdr *, th, u32, th_len) > > > +{ > > > +#ifdef CONFIG_SYN_COOKIES > > > + u32 cookie; > > > + int ret; > > > + > > > + if (unlikely(th_len < sizeof(*th))) > > > + return -EINVAL; > > > + > > > + if (!th->ack || th->rst || th->syn) > > > + return -EINVAL; > > > + > > > + if (unlikely(iph_len < sizeof(struct iphdr))) > > > + return -EINVAL; > > > + > > > + cookie = ntohl(th->ack_seq) - 1; > > > + > > > + /* Both struct iphdr and struct ipv6hdr have the version field at the > > > + * same offset so we can cast to the shorter header (struct iphdr). > > > + */ > > > + switch (((struct iphdr *)iph)->version) { > > > + case 4: > > > > Did you consider just exposing __cookie_v4_check() and __cookie_v6_check()? > > No, I didn't, I just implemented it consistently with > bpf_tcp_check_syncookie, but let's consider it. > > I can't just pass a pointer from BPF without passing the size, so I > would need some wrappers around __cookie_v{4,6}_check anyway. The checks > for th_len and iph_len would have to stay in the helpers. The check for > TCP flags (ACK, !RST, !SYN) could be either in the helper or in BPF. The > switch would obviously be gone.
I'm not sure you would need the len checks in helper, they provide some guarantees I guess, but the void * is just memory I don't see any checks on its size. It could be the last byte of a value for example?
The verifier makes sure that the packet pointer and the size come together in function parameters (see check_arg_pair_ok). It also makes sure that the memory region defined by these two parameters is valid, i.e. in our case it belongs to packet data.
Now that the helper got a valid memory region, its length is still arbitrary. The helper has to check it's big enough to contain a TCP header, before trying to access its fields. Hence the checks in the helper.
I suspect we need to add verifier checks here anyways to ensure we don't walk off the end of a value unless something else is ensuring the iph is inside a valid memory block.
The verifier ensures that the [iph; iph+iph_len) is valid memory, but the helper still has to check that struct iphdr fits into this region. Otherwise iph_len could be too small, and the helper would access memory outside of the valid region.
Thanks for the details this all makes sense. See response to other mail about adding new types. Replied to the wrong email but I think the context is not lost.
Keeping my reply here in an attempt to de-fork :)
> > The bottom line is that it would be the same code, but without the > switch, and repeated twice. What benefit do you see in this approach?
The only benefit would be to shave some instructions off the program. XDP is about performance so I figure we shouldn't be adding arbitrary stuff here. OTOH you're already jumping into a helper so it might not matter at all.
> From my side, I only see the ability to drop one branch at the expense > of duplicating the code above the switch (th_len and iph_len checks).
Just not sure you need the checks either, can you just assume the user gives good data?
No, since the BPF program would be able to trick the kernel into reading from an invalid location (see the explanation above).
> > > My code at least has already run the code above before it would ever call > > this helper so all the other bits are duplicate. > > Sorry, I didn't quite understand this part. What "your code" are you > referring to?
Just that the XDP code I maintain has a if ipv4 {...} else ipv6{...} structure
Same for my code (see the last patch in the series).
Splitting into two helpers would allow to drop the extra switch in the helper, however:
- The code will be duplicated for the checks.
See response wrt PTR_TO_IP, PTR_TO_TCP types.
So about that (quoting some context from your other email):
We could have some new mem types, PTR_TO_IPV4, PTR_TO_IPv6, and PTR_TO_TCP. Then we simplify the helper signatures to just,
bpf_tcp_raw_check_syncookie_v4(iph, tcph); bpf_tcp_raw_check_syncookie_v6(iph, tcph);
And the verifier "knows" what a v4/v6 header is and does the mem check at verification time instead of run time.
I think this could probably be achieved with PTR_TO_BTF arguments to the helper (if we define appropriate struct types that the program can use for each header type)?
I get the following error when I try to pass the headers from packet data to a helper that accepts ARG_PTR_TO_BTF_ID:
; value = bpf_tcp_raw_gen_syncookie_ipv4(hdr->ipv4, hdr->tcp,
297: (79) r1 = *(u64 *)(r10 -80) ; R1_w=pkt(id=0,off=14,r=74,imm=0) R10=fp0
298: (79) r2 = *(u64 *)(r10 -72) ; R2_w=pkt(id=5,off=14,r=74,umax_value=60,var_off=(0x0; 0x3c)) R10=fp0
299: (bc) w3 = w9 ; R3_w=invP(id=0,umin_value=20,umax_value=60,var_off=(0x0; 0x3c)) R9=invP(id=0,umin_value=20,umax_value=60,var_off=(0x0; 0x3c))
300: (85) call bpf_tcp_raw_gen_syncookie_ipv4#192
R1 type=pkt expected=ptr_
processed 317 insns (limit 1000000) max_states_per_insn 0 total_states 23 peak_states 23 mark_read 12
-- END PROG LOAD LOG --
It looks like the verifier doesn't currently support such type conversion. Could you give any clue what is needed to add this support? Is it enough to extend compatible_reg_types, or should more checks be added anywhere?
I think what he meant was getting the size hint from the function prototype. In case of kfunc we do it by resolving type size from BTF, for the PTR_TO_MEM case when a size argument is missing. For helper, you can add a field to indicate the constant size hint in the bpf_func_proto, and then in check_func_arg directly do the equivalent check_helper_mem_access for arg_type_is_mem_ptr block if such a hint is set, instead of delaying it till check_mem_size_reg call when the next arg_type_is_mem_size block is executed.
Then you can have two helpers with same argument types but different size hint values for the header argument, so you wouldn't need an extra mem size parameter.
You may also want to disallow setting both the size hint and next argument as ARG_CONST_SIZE.
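As a rough sketch of that suggestion (the arg*_size fields below are hypothetical, shown only to illustrate the idea of a constant size hint in bpf_func_proto; the real field names and checks would be decided by the implementation), the IPv4 variant could declare its sizes up front so check_func_arg can validate the memory at load time:

static const struct bpf_func_proto bpf_tcp_raw_check_syncookie_ipv4_proto = {
	.func		= bpf_tcp_raw_check_syncookie_ipv4,
	.gpl_only	= true,
	.ret_type	= RET_INTEGER,
	.arg1_type	= ARG_PTR_TO_MEM,
	.arg1_size	= sizeof(struct iphdr),		/* hypothetical size hint */
	.arg2_type	= ARG_PTR_TO_MEM,
	.arg2_size	= sizeof(struct tcphdr),	/* hypothetical size hint */
};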
Alternatively, I can revert to ARG_PTR_TO_MEM and do size checks in runtime in the helper.
[...]
-- Kartikeya
-----Original Message----- From: Kumar Kartikeya Dwivedi memxor@gmail.com Sent: 21 February, 2022 17:22 To: Maxim Mikityanskiy maximmi@nvidia.com Cc: Toke Høiland-Jørgensen toke@toke.dk; John Fastabend john.fastabend@gmail.com; bpf@vger.kernel.org; Alexei Starovoitov ast@kernel.org; Daniel Borkmann daniel@iogearbox.net; Andrii Nakryiko andrii@kernel.org; netdev@vger.kernel.org; Tariq Toukan tariqt@nvidia.com; Martin KaFai Lau kafai@fb.com; Song Liu songliubraving@fb.com; Yonghong Song yhs@fb.com; KP Singh kpsingh@kernel.org; David S. Miller davem@davemloft.net; Jakub Kicinski kuba@kernel.org; Petar Penkov ppenkov@google.com; Lorenz Bauer lmb@cloudflare.com; Eric Dumazet edumazet@google.com; Hideaki YOSHIFUJI yoshfuji@linux-ipv6.org; David Ahern dsahern@kernel.org; Shuah Khan shuah@kernel.org; Jesper Dangaard Brouer hawk@kernel.org; Nathan Chancellor nathan@kernel.org; Nick Desaulniers ndesaulniers@google.com; Joe Stringer joe@cilium.io; Florent Revest revest@chromium.org; linux- kselftest@vger.kernel.org; Florian Westphal fw@strlen.de Subject: Re: [PATCH bpf-next v2 2/3] bpf: Add helpers to issue and check SYN cookies in XDP
On Mon, Feb 21, 2022 at 07:56:28PM IST, Maxim Mikityanskiy wrote:
On 2022-02-04 16:08, Toke Høiland-Jørgensen wrote:
John Fastabend john.fastabend@gmail.com writes:
Maxim Mikityanskiy wrote:
On 2022-01-31 23:19, John Fastabend wrote:
John Fastabend wrote: > Maxim Mikityanskiy wrote: > > On 2022-01-25 09:54, John Fastabend wrote: > > > Maxim Mikityanskiy wrote: > > > > The new helpers bpf_tcp_raw_{gen,check}_syncookie allow an XDP program to generate SYN cookies in response to TCP SYN packets and to check those cookies upon receiving the first ACK packet (the final packet of the TCP handshake).
> > > > Unlike bpf_tcp_{gen,check}_syncookie these new helpers don't need a listening socket on the local machine, which allows to use them together
> > > > with synproxy to accelerate SYN cookie generation. > > > > > > > > Signed-off-by: Maxim Mikityanskiy maximmi@nvidia.com > > > > Reviewed-by: Tariq Toukan tariqt@nvidia.com > > > > --- > > > > > > [...] > > > > > > > + > > > > +BPF_CALL_4(bpf_tcp_raw_check_syncookie, void *, iph, u32,
iph_len,
> > > > + struct tcphdr *, th, u32, th_len) > > > > +{ > > > > +#ifdef CONFIG_SYN_COOKIES > > > > + u32 cookie; > > > > + int ret; > > > > + > > > > + if (unlikely(th_len < sizeof(*th))) > > > > + return -EINVAL; > > > > + > > > > + if (!th->ack || th->rst || th->syn) > > > > + return -EINVAL; > > > > + > > > > + if (unlikely(iph_len < sizeof(struct iphdr))) > > > > + return -EINVAL; > > > > + > > > > + cookie = ntohl(th->ack_seq) - 1; > > > > + > > > > + /* Both struct iphdr and struct ipv6hdr have the
version field at the
> > > > + * same offset so we can cast to the shorter header
(struct iphdr).
> > > > + */ > > > > + switch (((struct iphdr *)iph)->version) { > > > > + case 4: > > > > > > Did you consider just exposing __cookie_v4_check() and
__cookie_v6_check()?
> > > > No, I didn't, I just implemented it consistently with > > bpf_tcp_check_syncookie, but let's consider it. > > > > I can't just pass a pointer from BPF without passing the size,
so I would need some wrappers around __cookie_v{4,6}_check anyway. The checks for th_len and iph_len would have to stay in the helpers. The check for TCP flags (ACK, !RST, !SYN) could be either in the helper or in BPF. The switch would obviously be gone.
> I'm not sure you would need the len checks in helper, they provide some guarantees I guess, but the void * is just memory I don't see any checks on its size. It could be the last byte of a value for example?
The verifier makes sure that the packet pointer and the size come together in function parameters (see check_arg_pair_ok). It also makes sure that the memory region defined by these two parameters is valid, i.e. in our case it belongs to packet data.
Now that the helper got a valid memory region, its length is still arbitrary. The helper has to check it's big enough to contain a TCP header, before trying to access its fields. Hence the checks in the helper.
I suspect we need to add verifier checks here anyways to ensure we don't walk off the end of a value unless something else is ensuring the iph is inside a valid memory block.
The verifier ensures that the [iph; iph+iph_len) is valid memory, but the helper still has to check that struct iphdr fits into this region. Otherwise iph_len could be too small, and the helper would access memory outside of the valid region.
Thanks for the details this all makes sense. See response to other mail about adding new types. Replied to the wrong email but I think the context is not lost.
Keeping my reply here in an attempt to de-fork :)
> > The bottom line is that it would be the same code, but without the switch, and repeated twice. What benefit do you see in this approach?
> The only benefit would be to shave some instructions off the program. XDP is about performance so I figure we shouldn't be adding arbitrary stuff here. OTOH you're already jumping into a helper so it might not matter at all.
> > From my side, I only see the ability to drop one branch at the expense of duplicating the code above the switch (th_len and iph_len checks).
> Just not sure you need the checks either, can you just assume the user gives good data?
No, since the BPF program would be able to trick the kernel into reading from an invalid location (see the explanation above).
> > > My code at least has already run the code above before it would ever call this helper so all the other bits are duplicate.
> > Sorry, I didn't quite understand this part. What "your code" are you referring to?
> Just that the XDP code I maintain has a if ipv4 {...} else ipv6{...} structure
Same for my code (see the last patch in the series).
Splitting into two helpers would allow to drop the extra switch in the helper, however:
- The code will be duplicated for the checks.
See response wrt PTR_TO_IP, PTR_TO_TCP types.
So about that (quoting some context from your other email):
We could have some new mem types, PTR_TO_IPV4, PTR_TO_IPv6, and PTR_TO_TCP. Then we simplify the helper signatures to just,
bpf_tcp_raw_check_syncookie_v4(iph, tcph); bpf_tcp_raw_check_syncookie_v6(iph, tcph);
And the verifier "knows" what a v4/v6 header is and does the mem check at verification time instead of run time.
I think this could probably be achieved with PTR_TO_BTF arguments to the helper (if we define appropriate struct types that the program can use for each header type)?
I get the following error when I try to pass the headers from packet data to a helper that accepts ARG_PTR_TO_BTF_ID:
; value = bpf_tcp_raw_gen_syncookie_ipv4(hdr->ipv4, hdr->tcp,
297: (79) r1 = *(u64 *)(r10 -80) ; R1_w=pkt(id=0,off=14,r=74,imm=0) R10=fp0
298: (79) r2 = *(u64 *)(r10 -72) ; R2_w=pkt(id=5,off=14,r=74,umax_value=60,var_off=(0x0; 0x3c)) R10=fp0
299: (bc) w3 = w9 ; R3_w=invP(id=0,umin_value=20,umax_value=60,var_off=(0x0; 0x3c)) R9=invP(id=0,umin_value=20,umax_value=60,var_off=(0x0; 0x3c))
300: (85) call bpf_tcp_raw_gen_syncookie_ipv4#192
R1 type=pkt expected=ptr_
processed 317 insns (limit 1000000) max_states_per_insn 0 total_states 23 peak_states 23 mark_read 12
-- END PROG LOAD LOG --
It looks like the verifier doesn't currently support such type conversion. Could you give any clue what is needed to add this support? Is it enough to extend compatible_reg_types, or should more checks be added anywhere?
I think what he meant was getting the size hint from the function prototype. In case of kfunc we do it by resolving type size from BTF, for the PTR_TO_MEM case when a size argument is missing. For helper, you can add a field to indicate the constant size hint in the bpf_func_proto, and then in check_func_arg directly do the equivalent check_helper_mem_access for arg_type_is_mem_ptr block if such a hint is set, instead of delaying it till check_mem_size_reg call when the next arg_type_is_mem_size block is executed.
Then you can have two helpers with same argument types but different size hint values for the header argument, so you wouldn't need an extra mem size parameter.
You may also want to disallow setting both the size hint and next argument as ARG_CONST_SIZE.
Thanks, I implemented your suggestion as a new feature of the verifier, and it works.
I'm ready to respin the series; I've split my new helpers, but splitting the existing helpers won't be part of the resubmission, because it is out of the scope of the changes I intended to push. It can be added later as an improvement, though.
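From the program side, the split could look roughly like this (helper names taken from the discussion above; the exact signatures in the respin may differ), replacing the single call that passed iph_len explicitly:

	if (hdr->ipv4)
		err = bpf_tcp_raw_check_syncookie_ipv4(hdr->ipv4, hdr->tcp);
	else if (hdr->ipv6)
		err = bpf_tcp_raw_check_syncookie_ipv6(hdr->ipv6, hdr->tcp);
	else
		return XDP_ABORTED;
	if (err)
		return XDP_DROP;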
Alternatively, I can revert to ARG_PTR_TO_MEM and do size checks in runtime in the helper.
[...]
-- Kartikeya
Maxim Mikityanskiy wrote:
On 2022-01-25 09:54, John Fastabend wrote:
Maxim Mikityanskiy wrote:
The new helpers bpf_tcp_raw_{gen,check}_syncookie allow an XDP program to generate SYN cookies in response to TCP SYN packets and to check those cookies upon receiving the first ACK packet (the final packet of the TCP handshake).
Unlike bpf_tcp_{gen,check}_syncookie these new helpers don't need a listening socket on the local machine, which allows to use them together with synproxy to accelerate SYN cookie generation.
Signed-off-by: Maxim Mikityanskiy maximmi@nvidia.com Reviewed-by: Tariq Toukan tariqt@nvidia.com
[...]
+BPF_CALL_4(bpf_tcp_raw_check_syncookie, void *, iph, u32, iph_len,
struct tcphdr *, th, u32, th_len)
+{
[...]
Did you consider just exposing __cookie_v4_check() and __cookie_v6_check()?
No, I didn't, I just implemented it consistently with bpf_tcp_check_syncookie, but let's consider it.
I can't just pass a pointer from BPF without passing the size, so I would need some wrappers around __cookie_v{4,6}_check anyway. The checks for th_len and iph_len would have to stay in the helpers. The check for TCP flags (ACK, !RST, !SYN) could be either in the helper or in BPF. The switch would obviously be gone.
For consideration... those duplicate runtime checks for things we could already know from the verifier side are bothering me.
We could have some new mem types, PTR_TO_IPV4, PTR_TO_IPv6, and PTR_TO_TCP. Then we simplify the helper signatures to just,
bpf_tcp_raw_check_syncookie_v4(iph, tcph); bpf_tcp_raw_check_syncookie_v6(iph, tcph);
And the verifier "knows" what a v4/v6 header is and does the mem check at verification time instead of run time.
Then the code becomes very straightforward,
BPF_CALL_2(bpf_tcp_raw_check_syncookie_v4, void *, iph, void *, th)
{
	u16 mss = tcp_parse_mss_option(th, 0) ?: TCP_MSS_DEFAULT;

	return __cookie_v4_init_sequence(iph, th, &mss);
}
We don't need length checks because we are guaranteed by the compiler to have valid lengths; assume the code is smart enough to understand SYN, ACK and RST, because any real program likely already knows this. And v4/v6 is likely also known by the real program already.
If we push a bit more on this mss with PTR_TO_TCP and PTR_TO_IP we can simply mark tcp_parse_mss_option and __cookie_v4_init_sequence and let BPF side call them.
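One way to "mark" those functions around this kernel version would be kfunc registration; the sketch below only illustrates that idea (the set name is made up, and whether these functions are safe to expose as-is still depends on the length guarantees discussed above):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/btf.h>
#include <linux/btf_ids.h>
#include <net/tcp.h>

/* Sketch: expose the existing kernel functions to XDP programs as kfuncs. */
BTF_SET_START(xdp_syncookie_kfunc_ids)
BTF_ID(func, tcp_parse_mss_option)
BTF_ID(func, __cookie_v4_init_sequence)
BTF_SET_END(xdp_syncookie_kfunc_ids)

static const struct btf_kfunc_id_set xdp_syncookie_kfunc_set = {
	.owner     = THIS_MODULE,
	.check_set = &xdp_syncookie_kfunc_ids,
};

static int __init xdp_syncookie_kfuncs_init(void)
{
	return register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &xdp_syncookie_kfunc_set);
}
late_initcall(xdp_syncookie_kfuncs_init);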
Curious what others think here.
The bottom line is that it would be the same code, but without the switch, and repeated twice. What benefit do you see in this approach? From my side, I only see the ability to drop one branch at the expense of duplicating the code above the switch (th_len and iph_len checks).
My code at least has already run the code above before it would ever call this helper so all the other bits are duplicate.
Sorry, I didn't quite understand this part. What "your code" are you referring to?
Just the XDP parsers we have already switch early on based on v4/v6 and I imagine that most progs also know this. So yes we are arguing about a simple switch, but instruction here and instruction there add up over time. Also passing the size through the helper bothers me slightly given the verifier should know the size already.
This commit adds selftests for the new BPF helpers: bpf_tcp_raw_gen_syncookie and bpf_tcp_raw_check_syncookie.
xdp_synproxy_kern.c is a BPF program that generates SYN cookies on allowed TCP ports and sends SYNACKs to clients, accelerating synproxy iptables module.
xdp_synproxy.c is a userspace control application that allows to configure the following options in runtime: list of allowed ports, MSS, window scale, TTL.
test_xdp_synproxy.sh is a script that demonstrates the setup of synproxy with XDP acceleration and serves as a selftest for the new feature.
Signed-off-by: Maxim Mikityanskiy maximmi@nvidia.com Reviewed-by: Tariq Toukan tariqt@nvidia.com --- tools/testing/selftests/bpf/.gitignore | 1 + tools/testing/selftests/bpf/Makefile | 5 +- .../selftests/bpf/progs/xdp_synproxy_kern.c | 743 ++++++++++++++++++ .../selftests/bpf/test_xdp_synproxy.sh | 71 ++ tools/testing/selftests/bpf/xdp_synproxy.c | 418 ++++++++++ 5 files changed, 1236 insertions(+), 2 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c create mode 100755 tools/testing/selftests/bpf/test_xdp_synproxy.sh create mode 100644 tools/testing/selftests/bpf/xdp_synproxy.c
diff --git a/tools/testing/selftests/bpf/.gitignore b/tools/testing/selftests/bpf/.gitignore index 1dad8d617da8..1bcffd73632e 100644 --- a/tools/testing/selftests/bpf/.gitignore +++ b/tools/testing/selftests/bpf/.gitignore @@ -41,3 +41,4 @@ test_cpp *.tmp xdpxceiver xdp_redirect_multi +xdp_synproxy diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile index 945f92d71db3..8b9bcb2226da 100644 --- a/tools/testing/selftests/bpf/Makefile +++ b/tools/testing/selftests/bpf/Makefile @@ -72,7 +72,8 @@ TEST_PROGS := test_kmod.sh \ test_bpftool.sh \ test_bpftool_metadata.sh \ test_doc_build.sh \ - test_xsk.sh + test_xsk.sh \ + test_xdp_synproxy.sh
TEST_PROGS_EXTENDED := with_addr.sh \ with_tunnels.sh \ @@ -82,7 +83,7 @@ TEST_PROGS_EXTENDED := with_addr.sh \ TEST_GEN_PROGS_EXTENDED = test_sock_addr test_skb_cgroup_id_user \ flow_dissector_load test_flow_dissector test_tcp_check_syncookie_user \ test_lirc_mode2_user xdping test_cpp runqslower bench bpf_testmod.ko \ - xdpxceiver xdp_redirect_multi + xdpxceiver xdp_redirect_multi xdp_synproxy
TEST_CUSTOM_PROGS = $(OUTPUT)/urandom_read
diff --git a/tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c b/tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c new file mode 100644 index 000000000000..a45d517e07ef --- /dev/null +++ b/tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c @@ -0,0 +1,743 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */ + +#include "vmlinux.h" + +#include <bpf/bpf_helpers.h> +#include <bpf/bpf_endian.h> +#include <asm/errno.h> + +#define NSEC_PER_SEC 1000000000L + +#define ETH_ALEN 6 +#define ETH_P_IP 0x0800 +#define ETH_P_IPV6 0x86DD + +#define tcp_flag_word(tp) (((union tcp_word_hdr *)(tp))->words[3]) + +#define IP_DF 0x4000 +#define IP_MF 0x2000 +#define IP_OFFSET 0x1fff + +#define NEXTHDR_TCP 6 + +#define TCPOPT_NOP 1 +#define TCPOPT_EOL 0 +#define TCPOPT_MSS 2 +#define TCPOPT_WINDOW 3 +#define TCPOPT_SACK_PERM 4 +#define TCPOPT_TIMESTAMP 8 + +#define TCPOLEN_MSS 4 +#define TCPOLEN_WINDOW 3 +#define TCPOLEN_SACK_PERM 2 +#define TCPOLEN_TIMESTAMP 10 + +#define TCP_TS_HZ 1000 +#define TS_OPT_WSCALE_MASK 0xf +#define TS_OPT_SACK (1 << 4) +#define TS_OPT_ECN (1 << 5) +#define TSBITS 6 +#define TSMASK (((__u32)1 << TSBITS) - 1) +#define TCP_MAX_WSCALE 14U + +#define IPV4_MAXLEN 60 +#define TCP_MAXLEN 60 + +#define DEFAULT_MSS4 1460 +#define DEFAULT_MSS6 1440 +#define DEFAULT_WSCALE 7 +#define DEFAULT_TTL 64 +#define MAX_ALLOWED_PORTS 8 + +#define swap(a, b) \ + do { typeof(a) __tmp = (a); (a) = (b); (b) = __tmp; } while (0) + +#define __get_unaligned_t(type, ptr) ({ \ + const struct { type x; } __attribute__((__packed__)) *__pptr = (typeof(__pptr))(ptr); \ + __pptr->x; \ +}) + +#define get_unaligned(ptr) __get_unaligned_t(typeof(*(ptr)), (ptr)) + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __type(key, __u32); + __type(value, __u64); + __uint(max_entries, 2); +} values SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __type(key, __u32); + __type(value, __u16); + __uint(max_entries, MAX_ALLOWED_PORTS); +} allowed_ports SEC(".maps"); + +extern struct nf_conn *bpf_xdp_ct_lookup(struct xdp_md *xdp_ctx, + struct bpf_sock_tuple *bpf_tuple, + __u32 len_tuple, + struct bpf_ct_opts *opts, + __u32 len_opts) __ksym; + +extern void bpf_ct_release(struct nf_conn *ct) __ksym; + +static __always_inline void swap_eth_addr(__u8 *a, __u8 *b) +{ + __u8 tmp[ETH_ALEN]; + + __builtin_memcpy(tmp, a, ETH_ALEN); + __builtin_memcpy(a, b, ETH_ALEN); + __builtin_memcpy(b, tmp, ETH_ALEN); +} + +static __always_inline __u16 csum_fold(__u32 csum) +{ + csum = (csum & 0xffff) + (csum >> 16); + csum = (csum & 0xffff) + (csum >> 16); + return (__u16)~csum; +} + +static __always_inline __u16 csum_tcpudp_magic(__be32 saddr, __be32 daddr, + __u32 len, __u8 proto, + __u32 csum) +{ + __u64 s = csum; + + s += (__u32)saddr; + s += (__u32)daddr; +#if defined(__BIG_ENDIAN__) + s += proto + len; +#elif defined(__LITTLE_ENDIAN__) + s += (proto + len) << 8; +#else +#error Unknown endian +#endif + s = (s & 0xffffffff) + (s >> 32); + s = (s & 0xffffffff) + (s >> 32); + + return csum_fold((__u32)s); +} + +static __always_inline __u16 csum_ipv6_magic(const struct in6_addr *saddr, + const struct in6_addr *daddr, + __u32 len, __u8 proto, __u32 csum) +{ + __u64 sum = csum; + int i; + +#pragma unroll + for (i = 0; i < 4; i++) + sum += (__u32)saddr->in6_u.u6_addr32[i]; + +#pragma unroll + for (i = 0; i < 4; i++) + sum += (__u32)daddr->in6_u.u6_addr32[i]; + + // Don't combine additions to avoid 32-bit overflow. 
+ sum += bpf_htonl(len); + sum += bpf_htonl(proto); + + sum = (sum & 0xffffffff) + (sum >> 32); + sum = (sum & 0xffffffff) + (sum >> 32); + + return csum_fold((__u32)sum); +} + +static __always_inline __u64 tcp_clock_ns(void) +{ + return bpf_ktime_get_ns(); +} + +static __always_inline __u32 tcp_ns_to_ts(__u64 ns) +{ + return ns / (NSEC_PER_SEC / TCP_TS_HZ); +} + +static __always_inline __u32 tcp_time_stamp_raw(void) +{ + return tcp_ns_to_ts(tcp_clock_ns()); +} + +struct tcpopt_context { + __u8 *ptr; + __u8 *end; + void *data_end; + __be32 *tsecr; + __u8 wscale; + bool option_timestamp; + bool option_sack; +}; + +static int tscookie_tcpopt_parse(struct tcpopt_context *ctx) +{ + __u8 opcode, opsize; + + if (ctx->ptr >= ctx->end) + return 1; + if (ctx->ptr >= ctx->data_end) + return 1; + + opcode = ctx->ptr[0]; + + if (opcode == TCPOPT_EOL) + return 1; + if (opcode == TCPOPT_NOP) { + ++ctx->ptr; + return 0; + } + + if (ctx->ptr + 1 >= ctx->end) + return 1; + if (ctx->ptr + 1 >= ctx->data_end) + return 1; + opsize = ctx->ptr[1]; + if (opsize < 2) + return 1; + + if (ctx->ptr + opsize > ctx->end) + return 1; + + switch (opcode) { + case TCPOPT_WINDOW: + if (opsize == TCPOLEN_WINDOW && ctx->ptr + TCPOLEN_WINDOW <= ctx->data_end) + ctx->wscale = ctx->ptr[2] < TCP_MAX_WSCALE ? ctx->ptr[2] : TCP_MAX_WSCALE; + break; + case TCPOPT_TIMESTAMP: + if (opsize == TCPOLEN_TIMESTAMP && ctx->ptr + TCPOLEN_TIMESTAMP <= ctx->data_end) { + ctx->option_timestamp = true; + /* Client's tsval becomes our tsecr. */ + *ctx->tsecr = get_unaligned((__be32 *)(ctx->ptr + 2)); + } + break; + case TCPOPT_SACK_PERM: + if (opsize == TCPOLEN_SACK_PERM) + ctx->option_sack = true; + break; + } + + ctx->ptr += opsize; + + return 0; +} + +static int tscookie_tcpopt_parse_batch(__u32 index, void *context) +{ + int i; + + for (i = 0; i < 7; i++) + if (tscookie_tcpopt_parse(context)) + return 1; + return 0; +} + +static __always_inline bool tscookie_init(struct tcphdr *tcp_header, + __u16 tcp_len, __be32 *tsval, + __be32 *tsecr, void *data_end) +{ + struct tcpopt_context loop_ctx = { + .ptr = (__u8 *)(tcp_header + 1), + .end = (__u8 *)tcp_header + tcp_len, + .data_end = data_end, + .tsecr = tsecr, + .wscale = TS_OPT_WSCALE_MASK, + .option_timestamp = false, + .option_sack = false, + }; + u32 cookie; + + bpf_loop(6, tscookie_tcpopt_parse_batch, &loop_ctx, 0); + + if (!loop_ctx.option_timestamp) + return false; + + cookie = tcp_time_stamp_raw() & ~TSMASK; + cookie |= loop_ctx.wscale & TS_OPT_WSCALE_MASK; + if (loop_ctx.option_sack) + cookie |= TS_OPT_SACK; + if (tcp_header->ece && tcp_header->cwr) + cookie |= TS_OPT_ECN; + *tsval = bpf_htonl(cookie); + + return true; +} + +static __always_inline void values_get_tcpipopts(__u16 *mss, __u8 *wscale, + __u8 *ttl, bool ipv6) +{ + __u32 key = 0; + __u64 *value; + + value = bpf_map_lookup_elem(&values, &key); + if (value && *value != 0) { + if (ipv6) + *mss = (*value >> 32) & 0xffff; + else + *mss = *value & 0xffff; + *wscale = (*value >> 16) & 0xf; + *ttl = (*value >> 24) & 0xff; + return; + } + + *mss = ipv6 ? 
DEFAULT_MSS6 : DEFAULT_MSS4; + *wscale = DEFAULT_WSCALE; + *ttl = DEFAULT_TTL; +} + +static __always_inline void values_inc_synacks(void) +{ + __u32 key = 1; + __u32 *value; + + value = bpf_map_lookup_elem(&values, &key); + if (value) + __sync_fetch_and_add(value, 1); +} + +static __always_inline bool check_port_allowed(__u16 port) +{ + __u32 i; + + for (i = 0; i < MAX_ALLOWED_PORTS; i++) { + __u32 key = i; + __u16 *value; + + value = bpf_map_lookup_elem(&allowed_ports, &key); + + if (!value) + break; + // 0 is a terminator value. Check it first to avoid matching on + // a forbidden port == 0 and returning true. + if (*value == 0) + break; + + if (*value == port) + return true; + } + + return false; +} + +struct header_pointers { + struct ethhdr *eth; + struct iphdr *ipv4; + struct ipv6hdr *ipv6; + struct tcphdr *tcp; + __u16 tcp_len; +}; + +static __always_inline int tcp_dissect(void *data, void *data_end, + struct header_pointers *hdr) +{ + hdr->eth = data; + if (hdr->eth + 1 > data_end) + return XDP_DROP; + + switch (bpf_ntohs(hdr->eth->h_proto)) { + case ETH_P_IP: + hdr->ipv6 = NULL; + + hdr->ipv4 = (void *)hdr->eth + sizeof(*hdr->eth); + if (hdr->ipv4 + 1 > data_end) + return XDP_DROP; + if (hdr->ipv4->ihl * 4 < sizeof(*hdr->ipv4)) + return XDP_DROP; + if (hdr->ipv4->version != 4) + return XDP_DROP; + + if (hdr->ipv4->protocol != IPPROTO_TCP) + return XDP_PASS; + + hdr->tcp = (void *)hdr->ipv4 + hdr->ipv4->ihl * 4; + break; + case ETH_P_IPV6: + hdr->ipv4 = NULL; + + hdr->ipv6 = (void *)hdr->eth + sizeof(*hdr->eth); + if (hdr->ipv6 + 1 > data_end) + return XDP_DROP; + if (hdr->ipv6->version != 6) + return XDP_DROP; + + // XXX: Extension headers are not supported and could circumvent + // XDP SYN flood protection. + if (hdr->ipv6->nexthdr != NEXTHDR_TCP) + return XDP_PASS; + + hdr->tcp = (void *)hdr->ipv6 + sizeof(*hdr->ipv6); + break; + default: + // XXX: VLANs will circumvent XDP SYN flood protection. + return XDP_PASS; + } + + if (hdr->tcp + 1 > data_end) + return XDP_DROP; + hdr->tcp_len = hdr->tcp->doff * 4; + if (hdr->tcp_len < sizeof(*hdr->tcp)) + return XDP_DROP; + + return XDP_TX; +} + +static __always_inline int tcp_lookup(struct xdp_md *ctx, struct header_pointers *hdr) +{ + struct bpf_ct_opts ct_lookup_opts = { + .netns_id = BPF_F_CURRENT_NETNS, + .l4proto = IPPROTO_TCP, + }; + struct bpf_sock_tuple tup = {}; + struct nf_conn *ct; + __u32 tup_size; + + if (hdr->ipv4) { + // TCP doesn't normally use fragments, and XDP can't reassemble them. + if ((hdr->ipv4->frag_off & bpf_htons(IP_DF | IP_MF | IP_OFFSET)) != bpf_htons(IP_DF)) + return XDP_DROP; + + tup.ipv4.saddr = hdr->ipv4->saddr; + tup.ipv4.daddr = hdr->ipv4->daddr; + tup.ipv4.sport = hdr->tcp->source; + tup.ipv4.dport = hdr->tcp->dest; + tup_size = sizeof(tup.ipv4); + } else if (hdr->ipv6) { + __builtin_memcpy(tup.ipv6.saddr, &hdr->ipv6->saddr, sizeof(tup.ipv6.saddr)); + __builtin_memcpy(tup.ipv6.daddr, &hdr->ipv6->daddr, sizeof(tup.ipv6.daddr)); + tup.ipv6.sport = hdr->tcp->source; + tup.ipv6.dport = hdr->tcp->dest; + tup_size = sizeof(tup.ipv6); + } else { + // The verifier can't track that either ipv4 or ipv6 is not NULL. 
+ return XDP_ABORTED; + } + ct = bpf_xdp_ct_lookup(ctx, &tup, tup_size, &ct_lookup_opts, sizeof(ct_lookup_opts)); + if (ct) { + unsigned long status = ct->status; + + bpf_ct_release(ct); + if (status & IPS_CONFIRMED_BIT) + return XDP_PASS; + } else if (ct_lookup_opts.error != -ENOENT) { + return XDP_ABORTED; + } + + // error == -ENOENT || !(status & IPS_CONFIRMED_BIT) + return XDP_TX; +} + +static __always_inline __u8 tcp_mkoptions(__be32 *buf, __be32 *tsopt, __u16 mss, + __u8 wscale) +{ + __be32 *start = buf; + + *buf++ = bpf_htonl((TCPOPT_MSS << 24) | (TCPOLEN_MSS << 16) | mss); + + if (!tsopt) + return buf - start; + + if (tsopt[0] & bpf_htonl(1 << 4)) + *buf++ = bpf_htonl((TCPOPT_SACK_PERM << 24) | + (TCPOLEN_SACK_PERM << 16) | + (TCPOPT_TIMESTAMP << 8) | + TCPOLEN_TIMESTAMP); + else + *buf++ = bpf_htonl((TCPOPT_NOP << 24) | + (TCPOPT_NOP << 16) | + (TCPOPT_TIMESTAMP << 8) | + TCPOLEN_TIMESTAMP); + *buf++ = tsopt[0]; + *buf++ = tsopt[1]; + + if ((tsopt[0] & bpf_htonl(0xf)) != bpf_htonl(0xf)) + *buf++ = bpf_htonl((TCPOPT_NOP << 24) | + (TCPOPT_WINDOW << 16) | + (TCPOLEN_WINDOW << 8) | + wscale); + + return buf - start; +} + +static __always_inline void tcp_gen_synack(struct tcphdr *tcp_header, + __u32 cookie, __be32 *tsopt, + __u16 mss, __u8 wscale) +{ + void *tcp_options; + + tcp_flag_word(tcp_header) = TCP_FLAG_SYN | TCP_FLAG_ACK; + if (tsopt && (tsopt[0] & bpf_htonl(1 << 5))) + tcp_flag_word(tcp_header) |= TCP_FLAG_ECE; + tcp_header->doff = 5; // doff is part of tcp_flag_word. + swap(tcp_header->source, tcp_header->dest); + tcp_header->ack_seq = bpf_htonl(bpf_ntohl(tcp_header->seq) + 1); + tcp_header->seq = bpf_htonl(cookie); + tcp_header->window = 0; + tcp_header->urg_ptr = 0; + tcp_header->check = 0; // Rely on hardware checksum offload. + + tcp_options = (void *)(tcp_header + 1); + tcp_header->doff += tcp_mkoptions(tcp_options, tsopt, mss, wscale); +} + +static __always_inline void tcpv4_gen_synack(struct header_pointers *hdr, + __u32 cookie, __be32 *tsopt) +{ + __u8 wscale; + __u16 mss; + __u8 ttl; + + values_get_tcpipopts(&mss, &wscale, &ttl, false); + + swap_eth_addr(hdr->eth->h_source, hdr->eth->h_dest); + + swap(hdr->ipv4->saddr, hdr->ipv4->daddr); + hdr->ipv4->check = 0; // Rely on hardware checksum offload. + hdr->ipv4->tos = 0; + hdr->ipv4->id = 0; + hdr->ipv4->ttl = ttl; + + tcp_gen_synack(hdr->tcp, cookie, tsopt, mss, wscale); + + hdr->tcp_len = hdr->tcp->doff * 4; + hdr->ipv4->tot_len = bpf_htons(sizeof(*hdr->ipv4) + hdr->tcp_len); +} + +static __always_inline void tcpv6_gen_synack(struct header_pointers *hdr, + __u32 cookie, __be32 *tsopt) +{ + __u8 wscale; + __u16 mss; + __u8 ttl; + + values_get_tcpipopts(&mss, &wscale, &ttl, true); + + swap_eth_addr(hdr->eth->h_source, hdr->eth->h_dest); + + swap(hdr->ipv6->saddr, hdr->ipv6->daddr); + *(__be32 *)hdr->ipv6 = bpf_htonl(0x60000000); + hdr->ipv6->hop_limit = ttl; + + tcp_gen_synack(hdr->tcp, cookie, tsopt, mss, wscale); + + hdr->tcp_len = hdr->tcp->doff * 4; + hdr->ipv6->payload_len = bpf_htons(hdr->tcp_len); +} + +static __always_inline int syncookie_handle_syn(struct header_pointers *hdr, + struct xdp_md *ctx, + void *data, void *data_end) +{ + __u32 old_pkt_size, new_pkt_size; + // Unlike clang 10, clang 11 and 12 generate code that doesn't pass the + // BPF verifier if tsopt is not volatile. 
Volatile forces it to store + // the pointer value and use it directly, otherwise tcp_mkoptions is + // (mis)compiled like this: + // if (!tsopt) + // return buf - start; + // reg = stored_return_value_of_bpf_tcp_raw_gen_tscookie; + // if (reg == 0) + // tsopt = tsopt_buf; + // else + // tsopt = NULL; + // ... + // *buf++ = tsopt[1]; + // It creates a dead branch where tsopt is assigned NULL, but the + // verifier can't prove it's dead and blocks the program. + __be32 * volatile tsopt = NULL; + __be32 tsopt_buf[2] = {}; + void *ip_header; + __u16 ip_len; + __u32 cookie; + __s64 value; + + if (hdr->ipv4) { + // Check the IPv4 and TCP checksums before creating a SYNACK. + value = bpf_csum_diff(0, 0, (void *)hdr->ipv4, hdr->ipv4->ihl * 4, 0); + if (value < 0) + return XDP_ABORTED; + if (csum_fold(value) != 0) + return XDP_DROP; // Bad IPv4 checksum. + + value = bpf_csum_diff(0, 0, (void *)hdr->tcp, hdr->tcp_len, 0); + if (value < 0) + return XDP_ABORTED; + if (csum_tcpudp_magic(hdr->ipv4->saddr, hdr->ipv4->daddr, + hdr->tcp_len, IPPROTO_TCP, value) != 0) + return XDP_DROP; // Bad TCP checksum. + + ip_header = hdr->ipv4; + ip_len = sizeof(*hdr->ipv4); + } else if (hdr->ipv6) { + // Check the TCP checksum before creating a SYNACK. + value = bpf_csum_diff(0, 0, (void *)hdr->tcp, hdr->tcp_len, 0); + if (value < 0) + return XDP_ABORTED; + if (csum_ipv6_magic(&hdr->ipv6->saddr, &hdr->ipv6->daddr, + hdr->tcp_len, IPPROTO_TCP, value) != 0) + return XDP_DROP; // Bad TCP checksum. + + ip_header = hdr->ipv6; + ip_len = sizeof(*hdr->ipv6); + } else { + return XDP_ABORTED; + } + + // Issue SYN cookies on allowed ports, drop SYN packets on + // blocked ports. + if (!check_port_allowed(bpf_ntohs(hdr->tcp->dest))) + return XDP_DROP; + + value = bpf_tcp_raw_gen_syncookie(ip_header, ip_len, + (void *)hdr->tcp, hdr->tcp_len); + if (value < 0) + return XDP_ABORTED; + cookie = (__u32)value; + + if (tscookie_init((void *)hdr->tcp, hdr->tcp_len, + &tsopt_buf[0], &tsopt_buf[1], data_end)) + tsopt = tsopt_buf; + + // Check that there is enough space for a SYNACK. It also covers + // the check that the destination of the __builtin_memmove below + // doesn't overflow. + if (data + sizeof(*hdr->eth) + ip_len + TCP_MAXLEN > data_end) + return XDP_ABORTED; + + if (hdr->ipv4) { + if (hdr->ipv4->ihl * 4 > sizeof(*hdr->ipv4)) { + struct tcphdr *new_tcp_header; + + new_tcp_header = data + sizeof(*hdr->eth) + sizeof(*hdr->ipv4); + __builtin_memmove(new_tcp_header, hdr->tcp, sizeof(*hdr->tcp)); + hdr->tcp = new_tcp_header; + + hdr->ipv4->ihl = sizeof(*hdr->ipv4) / 4; + } + + tcpv4_gen_synack(hdr, cookie, tsopt); + } else if (hdr->ipv6) { + tcpv6_gen_synack(hdr, cookie, tsopt); + } else { + return XDP_ABORTED; + } + + // Recalculate checksums. + hdr->tcp->check = 0; + value = bpf_csum_diff(0, 0, (void *)hdr->tcp, hdr->tcp_len, 0); + if (value < 0) + return XDP_ABORTED; + if (hdr->ipv4) { + hdr->tcp->check = csum_tcpudp_magic(hdr->ipv4->saddr, + hdr->ipv4->daddr, + hdr->tcp_len, + IPPROTO_TCP, + value); + + hdr->ipv4->check = 0; + value = bpf_csum_diff(0, 0, (void *)hdr->ipv4, sizeof(*hdr->ipv4), 0); + if (value < 0) + return XDP_ABORTED; + hdr->ipv4->check = csum_fold(value); + } else if (hdr->ipv6) { + hdr->tcp->check = csum_ipv6_magic(&hdr->ipv6->saddr, + &hdr->ipv6->daddr, + hdr->tcp_len, + IPPROTO_TCP, + value); + } else { + return XDP_ABORTED; + } + + // Set the new packet size. 
+ old_pkt_size = data_end - data; + new_pkt_size = sizeof(*hdr->eth) + ip_len + hdr->tcp->doff * 4; + if (bpf_xdp_adjust_tail(ctx, new_pkt_size - old_pkt_size)) + return XDP_ABORTED; + + values_inc_synacks(); + + return XDP_TX; +} + +static __always_inline int syncookie_handle_ack(struct header_pointers *hdr) +{ + int err; + + if (hdr->ipv4) + err = bpf_tcp_raw_check_syncookie(hdr->ipv4, sizeof(*hdr->ipv4), + (void *)hdr->tcp, hdr->tcp_len); + else if (hdr->ipv6) + err = bpf_tcp_raw_check_syncookie(hdr->ipv6, sizeof(*hdr->ipv6), + (void *)hdr->tcp, hdr->tcp_len); + else + return XDP_ABORTED; + if (err) + return XDP_DROP; + + return XDP_PASS; +} + +SEC("xdp/syncookie") +int syncookie_xdp(struct xdp_md *ctx) +{ + void *data_end = (void *)(long)ctx->data_end; + void *data = (void *)(long)ctx->data; + struct header_pointers hdr; + __s64 value; + int ret; + + struct bpf_ct_opts ct_lookup_opts = { + .netns_id = BPF_F_CURRENT_NETNS, + .l4proto = IPPROTO_TCP, + }; + + ret = tcp_dissect(data, data_end, &hdr); + if (ret != XDP_TX) + return ret; + + ret = tcp_lookup(ctx, &hdr); + if (ret != XDP_TX) + return ret; + + // Packet is TCP and doesn't belong to an established connection. + + if ((hdr.tcp->syn ^ hdr.tcp->ack) != 1) + return XDP_DROP; + + // Grow the TCP header to TCP_MAXLEN to be able to pass any hdr.tcp_len + // to bpf_tcp_raw_gen_syncookie and pass the verifier. + if (bpf_xdp_adjust_tail(ctx, TCP_MAXLEN - hdr.tcp_len)) + return XDP_ABORTED; + + data_end = (void *)(long)ctx->data_end; + data = (void *)(long)ctx->data; + + if (hdr.ipv4) { + hdr.eth = data; + hdr.ipv4 = (void *)hdr.eth + sizeof(*hdr.eth); + // IPV4_MAXLEN is needed when calculating checksum. + // At least sizeof(struct iphdr) is needed here to access ihl. + if ((void *)hdr.ipv4 + IPV4_MAXLEN > data_end) + return XDP_ABORTED; + hdr.tcp = (void *)hdr.ipv4 + hdr.ipv4->ihl * 4; + } else if (hdr.ipv6) { + hdr.eth = data; + hdr.ipv6 = (void *)hdr.eth + sizeof(*hdr.eth); + hdr.tcp = (void *)hdr.ipv6 + sizeof(*hdr.ipv6); + } else { + return XDP_ABORTED; + } + + if ((void *)hdr.tcp + TCP_MAXLEN > data_end) + return XDP_ABORTED; + + // We run out of registers, tcp_len gets spilled to the stack, and the + // verifier forgets its min and max values checked above in tcp_dissect. + hdr.tcp_len = hdr.tcp->doff * 4; + if (hdr.tcp_len < sizeof(*hdr.tcp)) + return XDP_ABORTED; + + return hdr.tcp->syn ? syncookie_handle_syn(&hdr, ctx, data, data_end) : + syncookie_handle_ack(&hdr); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/test_xdp_synproxy.sh b/tools/testing/selftests/bpf/test_xdp_synproxy.sh new file mode 100755 index 000000000000..8f2811983212 --- /dev/null +++ b/tools/testing/selftests/bpf/test_xdp_synproxy.sh @@ -0,0 +1,71 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. + +set -e + +PORT=8080 + +DIR="$(dirname "$0")" +SERVER_PID= + +fail() { + echo 'test_xdp_synproxy: [FAIL]' + exit 1 +} + +cleanup() { + set +e + + ip link del tmp0 + ip netns del synproxy + + # Kill background jobs, if any. + kill "$(jobs -p)" +} + +trap cleanup SIGINT SIGTERM EXIT + +modprobe nf_conntrack +ip netns add synproxy +ip netns exec synproxy ip link set lo up +ip link add tmp0 type veth peer name tmp1 +sleep 1 # Wait, otherwise the IP address is not applied to tmp0. 
+ip link set tmp1 netns synproxy +ip link set tmp0 up +ip addr replace 198.18.0.1/24 dev tmp0 +ip netns exec synproxy ip link set tmp1 up +ip netns exec synproxy ip addr replace 198.18.0.2/24 dev tmp1 +ip netns exec synproxy sysctl -w net.ipv4.tcp_syncookies=2 +ip netns exec synproxy sysctl -w net.ipv4.tcp_timestamps=1 +ip netns exec synproxy sysctl -w net.netfilter.nf_conntrack_tcp_loose=0 +ip netns exec synproxy iptables -t raw -I PREROUTING \ + -i tmp1 -p tcp -m tcp --syn --dport "$PORT" -j CT --notrack +ip netns exec synproxy iptables -t filter -A INPUT \ + -i tmp1 -p tcp -m tcp --dport "$PORT" -m state --state INVALID,UNTRACKED \ + -j SYNPROXY --sack-perm --timestamp --wscale 7 --mss 1460 +ip netns exec synproxy iptables -t filter -A INPUT \ + -i tmp1 -m state --state INVALID -j DROP +# When checksum offload is enabled, the XDP program sees wrong checksums and +# drops packets. +ethtool -K tmp0 tx off +# Workaround required for veth. +ip link set tmp0 xdp object "$DIR/xdp_dummy.o" section xdp 2> /dev/null + +SYNACKS="$(ip netns exec synproxy "$DIR/xdp_synproxy" --iface tmp1 \ + --ports "$PORT" --mss4 1460 --mss6 1440 --wscale 7 --ttl 64 --single | \ + cut -d: -f2)" +[ "$SYNACKS" -eq 0 ] || fail + +# Different nc implementations accept different parameters. +{ ip netns exec synproxy nc -l -p "$PORT" || ip netns exec synproxy nc -l "$PORT"; } & +SERVER_PID="$!" +sleep 1 # Wait for the server to start. + +echo -n > /dev/tcp/198.18.0.2/"$PORT" + +SYNACKS="$(ip netns exec synproxy "$DIR/xdp_synproxy" --iface tmp1 --single | \ + cut -d: -f2)" +[ "$SYNACKS" -eq 1 ] || fail + +echo 'test_xdp_synproxy: ok' diff --git a/tools/testing/selftests/bpf/xdp_synproxy.c b/tools/testing/selftests/bpf/xdp_synproxy.c new file mode 100644 index 000000000000..e27e86a82a4f --- /dev/null +++ b/tools/testing/selftests/bpf/xdp_synproxy.c @@ -0,0 +1,418 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* Copyright (c) 2021-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */ + +#include <stdnoreturn.h> +#include <stdlib.h> +#include <stdio.h> +#include <string.h> +#include <errno.h> +#include <unistd.h> +#include <getopt.h> +#include <signal.h> +#include <sys/types.h> +#include <bpf/bpf.h> +#include <bpf/libbpf.h> +#include <net/if.h> +#include <linux/if_link.h> +#include <linux/limits.h> + +static unsigned int ifindex; +static __u32 attached_prog_id; + +static void noreturn cleanup(int sig) +{ + DECLARE_LIBBPF_OPTS(bpf_xdp_set_link_opts, opts); + int prog_fd; + int err; + + if (attached_prog_id == 0) + exit(0); + + prog_fd = bpf_prog_get_fd_by_id(attached_prog_id); + if (prog_fd < 0) { + fprintf(stderr, "Error: bpf_prog_get_fd_by_id: %s\n", strerror(-prog_fd)); + err = bpf_set_link_xdp_fd(ifindex, -1, 0); + if (err < 0) { + fprintf(stderr, "Error: bpf_set_link_xdp_fd: %s\n", strerror(-err)); + fprintf(stderr, "Failed to detach XDP program\n"); + exit(1); + } + } else { + opts.old_fd = prog_fd; + err = bpf_set_link_xdp_fd_opts(ifindex, -1, XDP_FLAGS_REPLACE, &opts); + close(prog_fd); + if (err < 0) { + fprintf(stderr, "Error: bpf_set_link_xdp_fd_opts: %s\n", strerror(-err)); + // Not an error if already replaced by someone else. + if (err != -EEXIST) { + fprintf(stderr, "Failed to detach XDP program\n"); + exit(1); + } + } + } + exit(0); +} + +static noreturn void usage(const char *progname) +{ + fprintf(stderr, "Usage: %s [--iface <iface>|--prog <prog_id>] [--mss4 <mss ipv4> --mss6 <mss ipv6> --wscale <wscale> --ttl <ttl>] [--ports <port1>,<port2>,...] 
[--single]\n", + progname); + exit(1); +} + +static unsigned long parse_arg_ul(const char *progname, const char *arg, unsigned long limit) +{ + unsigned long res; + char *endptr; + + errno = 0; + res = strtoul(arg, &endptr, 10); + if (errno != 0 || *endptr != '\0' || arg[0] == '\0' || res > limit) + usage(progname); + + return res; +} + +static void parse_options(int argc, char *argv[], unsigned int *ifindex, __u32 *prog_id, + __u64 *tcpipopts, char **ports, bool *single) +{ + static struct option long_options[] = { + { "help", no_argument, NULL, 'h' }, + { "iface", required_argument, NULL, 'i' }, + { "prog", required_argument, NULL, 'x' }, + { "mss4", required_argument, NULL, 4 }, + { "mss6", required_argument, NULL, 6 }, + { "wscale", required_argument, NULL, 'w' }, + { "ttl", required_argument, NULL, 't' }, + { "ports", required_argument, NULL, 'p' }, + { "single", no_argument, NULL, 's' }, + { NULL, 0, NULL, 0 }, + }; + unsigned long mss4, mss6, wscale, ttl; + unsigned int tcpipopts_mask = 0; + + if (argc < 2) + usage(argv[0]); + + *ifindex = 0; + *prog_id = 0; + *tcpipopts = 0; + *ports = NULL; + *single = false; + + while (true) { + int opt; + + opt = getopt_long(argc, argv, "", long_options, NULL); + if (opt == -1) + break; + + switch (opt) { + case 'h': + usage(argv[0]); + break; + case 'i': + *ifindex = if_nametoindex(optarg); + if (*ifindex == 0) + usage(argv[0]); + break; + case 'x': + *prog_id = parse_arg_ul(argv[0], optarg, UINT32_MAX); + if (*prog_id == 0) + usage(argv[0]); + break; + case 4: + mss4 = parse_arg_ul(argv[0], optarg, UINT16_MAX); + tcpipopts_mask |= 1 << 0; + break; + case 6: + mss6 = parse_arg_ul(argv[0], optarg, UINT16_MAX); + tcpipopts_mask |= 1 << 1; + break; + case 'w': + wscale = parse_arg_ul(argv[0], optarg, 14); + tcpipopts_mask |= 1 << 2; + break; + case 't': + ttl = parse_arg_ul(argv[0], optarg, UINT8_MAX); + tcpipopts_mask |= 1 << 3; + break; + case 'p': + *ports = optarg; + break; + case 's': + *single = true; + break; + default: + usage(argv[0]); + } + } + if (optind < argc) + usage(argv[0]); + + if (tcpipopts_mask == 0xf) { + if (mss4 == 0 || mss6 == 0 || wscale == 0 || ttl == 0) + usage(argv[0]); + *tcpipopts = (mss6 << 32) | (ttl << 24) | (wscale << 16) | mss4; + } else if (tcpipopts_mask != 0) { + usage(argv[0]); + } + + if (*ifindex != 0 && *prog_id != 0) + usage(argv[0]); + if (*ifindex == 0 && *prog_id == 0) + usage(argv[0]); +} + +static int syncookie_attach(const char *argv0, unsigned int ifindex) +{ + struct bpf_prog_info info = {}; + __u32 info_len = sizeof(info); + char xdp_filename[PATH_MAX]; + struct bpf_program *prog; + struct bpf_object *obj; + int prog_fd; + int err; + + snprintf(xdp_filename, sizeof(xdp_filename), "%s_kern.o", argv0); + obj = bpf_object__open_file(xdp_filename, NULL); + err = libbpf_get_error(obj); + if (err < 0) { + fprintf(stderr, "Error: bpf_object__open_file: %s\n", strerror(-err)); + return err; + } + + err = bpf_object__load(obj); + if (err < 0) { + fprintf(stderr, "Error: bpf_object__open_file: %s\n", strerror(-err)); + return err; + } + + prog = bpf_object__find_program_by_name(obj, "syncookie_xdp"); + if (!prog) { + fprintf(stderr, "Error: bpf_object__find_program_by_name: program syncookie_xdp was not found\n"); + return -ENOENT; + } + + prog_fd = bpf_program__fd(prog); + + err = bpf_obj_get_info_by_fd(prog_fd, &info, &info_len); + if (err < 0) { + fprintf(stderr, "Error: bpf_obj_get_info_by_fd: %s\n", strerror(-err)); + goto out; + } + attached_prog_id = info.id; + signal(SIGINT, cleanup); + 
+	signal(SIGTERM, cleanup);
+	err = bpf_set_link_xdp_fd(ifindex, prog_fd, XDP_FLAGS_UPDATE_IF_NOEXIST);
+	if (err < 0) {
+		fprintf(stderr, "Error: bpf_set_link_xdp_fd: %s\n", strerror(-err));
+		signal(SIGINT, SIG_DFL);
+		signal(SIGTERM, SIG_DFL);
+		attached_prog_id = 0;
+		goto out;
+	}
+	err = 0;
+out:
+	bpf_object__close(obj);
+	return err;
+}
+
+static int syncookie_open_bpf_maps(__u32 prog_id, int *values_map_fd, int *ports_map_fd)
+{
+	struct bpf_prog_info prog_info;
+	__u32 map_ids[8];
+	__u32 info_len;
+	int prog_fd;
+	int err;
+	int i;
+
+	*values_map_fd = -1;
+	*ports_map_fd = -1;
+
+	prog_fd = bpf_prog_get_fd_by_id(prog_id);
+	if (prog_fd < 0) {
+		fprintf(stderr, "Error: bpf_prog_get_fd_by_id: %s\n", strerror(-prog_fd));
+		return prog_fd;
+	}
+
+	prog_info = (struct bpf_prog_info) {
+		.nr_map_ids = 8,
+		.map_ids = (__u64)map_ids,
+	};
+	info_len = sizeof(prog_info);
+
+	err = bpf_obj_get_info_by_fd(prog_fd, &prog_info, &info_len);
+	if (err != 0) {
+		fprintf(stderr, "Error: bpf_obj_get_info_by_fd: %s\n", strerror(-err));
+		goto out;
+	}
+
+	if (prog_info.type != BPF_PROG_TYPE_XDP) {
+		fprintf(stderr, "Error: BPF prog type is not BPF_PROG_TYPE_XDP\n");
+		err = -ENOENT;
+		goto out;
+	}
+	if (prog_info.nr_map_ids < 2) {
+		fprintf(stderr, "Error: Found %u BPF maps, expected at least 2\n",
+			prog_info.nr_map_ids);
+		err = -ENOENT;
+		goto out;
+	}
+
+	for (i = 0; i < prog_info.nr_map_ids; i++) {
+		struct bpf_map_info map_info = {};
+		int map_fd;
+
+		err = bpf_map_get_fd_by_id(map_ids[i]);
+		if (err < 0) {
+			fprintf(stderr, "Error: bpf_map_get_fd_by_id: %s\n", strerror(-err));
+			goto err_close_map_fds;
+		}
+		map_fd = err;
+
+		info_len = sizeof(map_info);
+		err = bpf_obj_get_info_by_fd(map_fd, &map_info, &info_len);
+		if (err != 0) {
+			fprintf(stderr, "Error: bpf_obj_get_info_by_fd: %s\n", strerror(-err));
+			close(map_fd);
+			goto err_close_map_fds;
+		}
+		if (strcmp(map_info.name, "values") == 0) {
+			*values_map_fd = map_fd;
+			continue;
+		}
+		if (strcmp(map_info.name, "allowed_ports") == 0) {
+			*ports_map_fd = map_fd;
+			continue;
+		}
+		close(map_fd);
+	}
+
+	if (*values_map_fd != -1 && *ports_map_fd != -1) {
+		err = 0;
+		goto out;
+	}
+
+	err = -ENOENT;
+
+err_close_map_fds:
+	if (*values_map_fd != -1)
+		close(*values_map_fd);
+	if (*ports_map_fd != -1)
+		close(*ports_map_fd);
+	*values_map_fd = -1;
+	*ports_map_fd = -1;
+
+out:
+	close(prog_fd);
+	return err;
+}
+
+int main(int argc, char *argv[])
+{
+	int values_map_fd, ports_map_fd;
+	__u64 tcpipopts;
+	bool firstiter;
+	__u64 prevcnt;
+	__u32 prog_id;
+	char *ports;
+	bool single;
+	int err = 0;
+
+	parse_options(argc, argv, &ifindex, &prog_id, &tcpipopts, &ports, &single);
+
+	if (prog_id == 0) {
+		err = bpf_get_link_xdp_id(ifindex, &prog_id, 0);
+		if (err < 0) {
+			fprintf(stderr, "Error: bpf_get_link_xdp_id: %s\n", strerror(-err));
+			goto out;
+		}
+		if (prog_id == 0) {
+			err = syncookie_attach(argv[0], ifindex);
+			if (err < 0)
+				goto out;
+			prog_id = attached_prog_id;
+		}
+	}
+
+	err = syncookie_open_bpf_maps(prog_id, &values_map_fd, &ports_map_fd);
+	if (err < 0)
+		goto out;
+
+	if (ports) {
+		__u16 port_last = 0;
+		__u32 port_idx = 0;
+		char *p = ports;
+
+		fprintf(stderr, "Replacing allowed ports\n");
+
+		while (p && *p != '\0') {
+			char *token = strsep(&p, ",");
+			__u16 port;
+
+			port = parse_arg_ul(argv[0], token, UINT16_MAX);
+			err = bpf_map_update_elem(ports_map_fd, &port_idx, &port, BPF_ANY);
+			if (err != 0) {
+				fprintf(stderr, "Error: bpf_map_update_elem: %s\n", strerror(-err));
+				fprintf(stderr, "Failed to add port %u (index %u)\n",
+					port, port_idx);
+				goto out_close_maps;
+			}
+			fprintf(stderr, "Added port %u\n", port);
+			port_idx++;
+		}
+		err = bpf_map_update_elem(ports_map_fd, &port_idx, &port_last, BPF_ANY);
+		if (err != 0) {
+			fprintf(stderr, "Error: bpf_map_update_elem: %s\n", strerror(-err));
+			fprintf(stderr, "Failed to add the terminator value 0 (index %u)\n",
+				port_idx);
+			goto out_close_maps;
+		}
+	}
+
+	if (tcpipopts) {
+		__u32 key = 0;
+
+		fprintf(stderr, "Replacing TCP/IP options\n");
+
+		err = bpf_map_update_elem(values_map_fd, &key, &tcpipopts, BPF_ANY);
+		if (err != 0) {
+			fprintf(stderr, "Error: bpf_map_update_elem: %s\n", strerror(-err));
+			goto out_close_maps;
+		}
+	}
+
+	if ((ports || tcpipopts) && attached_prog_id == 0 && !single)
+		goto out_close_maps;
+
+	prevcnt = 0;
+	firstiter = true;
+	while (true) {
+		__u32 key = 1;
+		__u64 value;
+
+		err = bpf_map_lookup_elem(values_map_fd, &key, &value);
+		if (err != 0) {
+			fprintf(stderr, "Error: bpf_map_lookup_elem: %s\n", strerror(-err));
+			goto out_close_maps;
+		}
+		if (firstiter) {
+			prevcnt = value;
+			firstiter = false;
+		}
+		if (single) {
+			printf("Total SYNACKs generated: %llu\n", value);
+			break;
+		}
+		printf("SYNACKs generated: %llu (total %llu)\n", value - prevcnt, value);
+		prevcnt = value;
+		sleep(1);
+	}
+
+out_close_maps:
+	close(values_map_fd);
+	close(ports_map_fd);
+out:
+	return err == 0 ? 0 : 1;
+}
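
A note on the control application above: the __u64 written to key 0 of the "values" map packs all four SYNACK options into a single value that the XDP program unpacks on its side. A minimal sketch of the layout produced by parse_options() (the helper name is illustrative and not part of the patch; assumes <linux/types.h> for the __u64/__u16/__u8 types):

	/* Illustrative only: packs --mss4, --mss6, --wscale and --ttl the same
	 * way parse_options() does before the value is written to the map.
	 * Bits 0-15: IPv4 MSS, 16-23: window scale, 24-31: TTL, 32-47: IPv6 MSS.
	 */
	static __u64 pack_tcpipopts(__u16 mss4, __u16 mss6, __u8 wscale, __u8 ttl)
	{
		return ((__u64)mss6 << 32) | ((__u64)ttl << 24) |
		       ((__u64)wscale << 16) | mss4;
	}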
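
To try the control application outside of the selftest script, typical invocations look like the following (the interface name, port and program id are only examples):

	# Attach the XDP program to eth0 and accept SYN cookies for port 8080;
	# the tool then keeps printing the per-second SYNACK counter.
	./xdp_synproxy --iface eth0 --ports 8080 --mss4 1460 --mss6 1440 --wscale 7 --ttl 64
	# Print the total SYNACK counter of an already attached program once and exit.
	./xdp_synproxy --prog <prog_id> --single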