From: Jakub Kicinski kuba@kernel.org
When ur_load_imm_any() is inlined into jeq_imm(), it's possible for the compiler to deduce a case where _val can only have the value of -1 at compile time. Specifically,
	/* struct bpf_insn: _s32 imm */
	u64 imm = insn->imm; /* sign extend */
	if (imm >> 32) { /* non-zero only if insn->imm is negative */
		/* inlined from ur_load_imm_any */
		u32 __imm = imm >> 32; /* therefore, always 0xffffffff */
		if (__builtin_constant_p(__imm) && __imm > 255)
			compiletime_assert_XXX()
This can result in tripping a BUILD_BUG_ON() in __BF_FIELD_CHECK() that checks that a given value is representable in one byte (interpreted as unsigned).
FIELD_FIT() should return true or false at runtime for whether a value can fit or not. Don't break the build over a value that's too large for the mask. We'd prefer to keep the inlining and compiler optimizations even though we know this case will always return false.
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/kernel-hardening/CAK7LNASvb0UDJ0U5wkYYRzTAdnEs64HjXp...
Reported-by: Masahiro Yamada masahiroy@kernel.org
Debugged-by: Sami Tolvanen samitolvanen@google.com
Signed-off-by: Jakub Kicinski kuba@kernel.org
Signed-off-by: Nick Desaulniers ndesaulniers@google.com
---
 include/linux/bitfield.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/bitfield.h b/include/linux/bitfield.h
index 48ea093ff04c..4e035aca6f7e 100644
--- a/include/linux/bitfield.h
+++ b/include/linux/bitfield.h
@@ -77,7 +77,7 @@
  */
 #define FIELD_FIT(_mask, _val) \
 	({ \
-		__BF_FIELD_CHECK(_mask, 0ULL, _val, "FIELD_FIT: "); \
+		__BF_FIELD_CHECK(_mask, 0ULL, 0ULL, "FIELD_FIT: "); \
 		!((((typeof(_mask))_val) << __bf_shf(_mask)) & ~(_mask)); \
 	})
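For context, here is a minimal userspace sketch of the behavior we want from
FIELD_FIT() after this change: a plain runtime true/false, never a build
failure. This is not the kernel header; the helper names and values below are
simplified stand-ins.

	#include <stdint.h>
	#include <stdio.h>

	/* simplified stand-ins for __bf_shf() and FIELD_FIT() */
	#define BF_SHF(mask)	(__builtin_ffsll(mask) - 1)
	#define FIT(mask, val)	\
		(!((((uint64_t)(val)) << BF_SHF(mask)) & ~(uint64_t)(mask)))

	int main(void)
	{
		const uint64_t mask = 0xff00;	/* an 8-bit field at bits 8..15 */

		printf("%d\n", FIT(mask, 0x42));	/* 1: value fits the field */
		printf("%d\n", FIT(mask, 0xffffffffu));	/* 0: too large, but no build break */
		return 0;
	}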
On Tue, Jul 07, 2020 at 02:16:41PM -0700, Nick Desaulniers wrote:
From: Jakub Kicinski kuba@kernel.org
When ur_load_imm_any() is inlined into jeq_imm(), it's possible for the compiler to deduce a case where _val can only have the value of -1 at compile time. Specifically,
	/* struct bpf_insn: _s32 imm */
	u64 imm = insn->imm; /* sign extend */
	if (imm >> 32) { /* non-zero only if insn->imm is negative */
		/* inlined from ur_load_imm_any */
		u32 __imm = imm >> 32; /* therefore, always 0xffffffff */
		if (__builtin_constant_p(__imm) && __imm > 255)
			compiletime_assert_XXX()
This can result in tripping a BUILD_BUG_ON() in __BF_FIELD_CHECK() that checks that a given value is representable in one byte (interpreted as unsigned).
FIELD_FIT() should return true or false at runtime for whether a value can fit or not. Don't break the build over a value that's too large for the mask. We'd prefer to keep the inlining and compiler optimizations even though we know this case will always return false.
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/kernel-hardening/CAK7LNASvb0UDJ0U5wkYYRzTAdnEs64HjXp...
Reported-by: Masahiro Yamada masahiroy@kernel.org
Debugged-by: Sami Tolvanen samitolvanen@google.com
Signed-off-by: Jakub Kicinski kuba@kernel.org
Signed-off-by: Nick Desaulniers ndesaulniers@google.com

 include/linux/bitfield.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/bitfield.h b/include/linux/bitfield.h
index 48ea093ff04c..4e035aca6f7e 100644
--- a/include/linux/bitfield.h
+++ b/include/linux/bitfield.h
@@ -77,7 +77,7 @@
  */
 #define FIELD_FIT(_mask, _val) \
 	({ \
-		__BF_FIELD_CHECK(_mask, 0ULL, _val, "FIELD_FIT: "); \
+		__BF_FIELD_CHECK(_mask, 0ULL, 0ULL, "FIELD_FIT: "); \
 		!((((typeof(_mask))_val) << __bf_shf(_mask)) & ~(_mask)); \
 	})
I confirmed that this fixes the issue. Thanks for sending the patch!
Sami
On 7/7/20 4:16 PM, Nick Desaulniers wrote:
From: Jakub Kicinski kuba@kernel.org
When ur_load_imm_any() is inlined into jeq_imm(), it's possible for the compiler to deduce a case where _val can only have the value of -1 at compile time. Specifically,
	/* struct bpf_insn: _s32 imm */
	u64 imm = insn->imm; /* sign extend */
	if (imm >> 32) { /* non-zero only if insn->imm is negative */
		/* inlined from ur_load_imm_any */
		u32 __imm = imm >> 32; /* therefore, always 0xffffffff */
		if (__builtin_constant_p(__imm) && __imm > 255)
			compiletime_assert_XXX()
This can result in tripping a BUILD_BUG_ON() in __BF_FIELD_CHECK() that checks that a given value is representable in one byte (interpreted as unsigned).
Why does FIELD_FIT() pass an unsigned long long value as the second argument to __BF_FIELD_CHECK()? Could it pass (typeof(_mask))0 instead? It wouldn't fix this particular case, because UR_REG_IMM_MAX is also defined with that type. But (without working through this in more detail) it seems like there might be a solution that preserves the compile-time checking.
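For concreteness, the (typeof(_mask))0 variant I have in mind would be
something like this (just a sketch, untested):

	#define FIELD_FIT(_mask, _val)						\
		({								\
			__BF_FIELD_CHECK(_mask, (typeof(_mask))0, _val,	\
					 "FIELD_FIT: ");			\
			!((((typeof(_mask))_val) << __bf_shf(_mask)) & ~(_mask)); \
		})

That way the dummy "reg" argument always has the same type as the mask.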
A second comment about this is that it might be nice to break __BF_FIELD_CHECK() into the parts that verify the mask (which could be used by FIELD_FIT() here) and the parts that verify other things.
That's all--just questions, I have no problem with the patch...
-Alex
On Tue, Jul 7, 2020 at 3:43 PM Alex Elder elder@linaro.org wrote:
On 7/7/20 4:16 PM, Nick Desaulniers wrote:
From: Jakub Kicinski kuba@kernel.org
When ur_load_imm_any() is inlined into jeq_imm(), it's possible for the compiler to deduce a case where _val can only have the value of -1 at compile time. Specifically,
	/* struct bpf_insn: _s32 imm */
	u64 imm = insn->imm; /* sign extend */
	if (imm >> 32) { /* non-zero only if insn->imm is negative */
		/* inlined from ur_load_imm_any */
		u32 __imm = imm >> 32; /* therefore, always 0xffffffff */
		if (__builtin_constant_p(__imm) && __imm > 255)
			compiletime_assert_XXX()
This can result in tripping a BUILD_BUG_ON() in __BF_FIELD_CHECK() that checks that a given value is representable in one byte (interpreted as unsigned).
Hi Alex, Thanks for taking a look. They're good and fair questions.
Why does FIELD_FIT() pass an unsigned long long value as the second argument to __BF_FIELD_CHECK()?
Was Jakub's suggestion; I don't feel strongly against it either way, though...
Could it pass (typeof(_mask))0 instead?
...might be nice to avoid implicit promotions and conversions if _mask is not the same size as _val.
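As a toy example of the sort of silent conversion I mean (userspace sketch,
made-up types and values, not the kernel macro itself): the cast to
typeof(_mask) quietly drops any bits of _val above the width of the mask's
type before the check even runs.

	#include <stdint.h>
	#include <stdio.h>

	#define SHF(mask)	(__builtin_ffsll(mask) - 1)
	/* same shape as FIELD_FIT(), but with a 16-bit mask type */
	#define FIT16(mask, val) \
		(!((((uint16_t)(val)) << SHF(mask)) & ~(mask)))

	int main(void)
	{
		uint16_t mask = 0x00ff;

		printf("%d\n", FIT16(mask, 0x42));	/* 1: fits, as expected */
		printf("%d\n", FIT16(mask, 0x10042));	/* 1: bit 16 silently truncated away */
		return 0;
	}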
It wouldn't fix this particular case, because UR_REG_IMM_MAX is also defined with that type. But (without working through this in more detail) it seems like there might be a solution that preserves the compile-time checking.
I'd argue the point of the patch is to not check at compile time for FIELD_FIT, since we have a case in drivers/net/ethernet/netronome/nfp/bpf/jit.c (jeq_imm()) that will always pass -1 (unintentionally due to compiler optimization).
A second comment about this is that it might be nice to break __BF_FIELD_CHECK() into the parts that verify the mask (which could be used by FIELD_FIT() here) and the parts that verify other things.
Like so? Jakub, WDYT? Or do you prefer v1+Alex's suggestion about using `(typeof(_mask))0` in place of 0ULL?
diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp_eth.c b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp_eth.c
index 311a5be25acb..938fc733fccb 100644
--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp_eth.c
+++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp_eth.c
@@ -492,11 +492,12 @@ nfp_eth_set_bit_config(struct nfp_nsp *nsp, unsigned int raw_idx,
 	return 0;
 }

-#define NFP_ETH_SET_BIT_CONFIG(nsp, raw_idx, mask, val, ctrl_bit) \
-	({ \
-		__BF_FIELD_CHECK(mask, 0ULL, val, "NFP_ETH_SET_BIT_CONFIG: "); \
-		nfp_eth_set_bit_config(nsp, raw_idx, mask, __bf_shf(mask), \
-				       val, ctrl_bit); \
+#define NFP_ETH_SET_BIT_CONFIG(nsp, raw_idx, mask, val, ctrl_bit) \
+	({ \
+		__BF_FIELD_CHECK_MASK(mask, "NFP_ETH_SET_BIT_CONFIG: "); \
+		__BF_FIELD_CHECK_VAL(mask, val, "NFP_ETH_SET_BIT_CONFIG: "); \
+		nfp_eth_set_bit_config(nsp, raw_idx, mask, __bf_shf(mask), \
+				       val, ctrl_bit); \
 	})

 /**
diff --git a/include/linux/bitfield.h b/include/linux/bitfield.h
index 48ea093ff04c..79651867beb3 100644
--- a/include/linux/bitfield.h
+++ b/include/linux/bitfield.h
@@ -41,18 +41,26 @@

 #define __bf_shf(x) (__builtin_ffsll(x) - 1)

-#define __BF_FIELD_CHECK(_mask, _reg, _val, _pfx) \
+#define __BF_FIELD_CHECK_MASK(_mask, _pfx) \
 	({ \
 		BUILD_BUG_ON_MSG(!__builtin_constant_p(_mask), \
 				 _pfx "mask is not constant"); \
 		BUILD_BUG_ON_MSG((_mask) == 0, _pfx "mask is zero"); \
+		__BUILD_BUG_ON_NOT_POWER_OF_2((_mask) + \
+					      (1ULL << __bf_shf(_mask))); \
+	})
+
+#define __BF_FIELD_CHECK_VAL(_mask, _val, _pfx) \
+	({ \
 		BUILD_BUG_ON_MSG(__builtin_constant_p(_val) ? \
 				 ~((_mask) >> __bf_shf(_mask)) & (_val) : 0, \
 				 _pfx "value too large for the field"); \
-		BUILD_BUG_ON_MSG((_mask) > (typeof(_reg))~0ull, \
+	})
+
+#define __BF_FIELD_CHECK_REG(_mask, _reg, _pfx) \
+	({ \
+		BUILD_BUG_ON_MSG((_mask) > (typeof(_reg))~0ULL, \
 				 _pfx "type of reg too small for mask"); \
-		__BUILD_BUG_ON_NOT_POWER_OF_2((_mask) + \
-					      (1ULL << __bf_shf(_mask))); \
 	})

 /**
@@ -64,7 +72,7 @@
  */
 #define FIELD_MAX(_mask) \
 	({ \
-		__BF_FIELD_CHECK(_mask, 0ULL, 0ULL, "FIELD_MAX: "); \
+		__BF_FIELD_CHECK_MASK(_mask, "FIELD_MAX: "); \
 		(typeof(_mask))((_mask) >> __bf_shf(_mask)); \
 	})

@@ -77,7 +85,7 @@
  */
 #define FIELD_FIT(_mask, _val) \
 	({ \
-		__BF_FIELD_CHECK(_mask, 0ULL, _val, "FIELD_FIT: "); \
+		__BF_FIELD_CHECK_MASK(_mask, "FIELD_FIT: "); \
 		!((((typeof(_mask))_val) << __bf_shf(_mask)) & ~(_mask)); \
 	})

@@ -91,7 +99,8 @@
  */
 #define FIELD_PREP(_mask, _val) \
 	({ \
-		__BF_FIELD_CHECK(_mask, 0ULL, _val, "FIELD_PREP: "); \
+		__BF_FIELD_CHECK_MASK(_mask, "FIELD_PREP: "); \
+		__BF_FIELD_CHECK_VAL(_mask, _val, "FIELD_PREP: "); \
 		((typeof(_mask))(_val) << __bf_shf(_mask)) & (_mask); \
 	})

@@ -105,7 +114,8 @@
  */
 #define FIELD_GET(_mask, _reg) \
 	({ \
-		__BF_FIELD_CHECK(_mask, _reg, 0U, "FIELD_GET: "); \
+		__BF_FIELD_CHECK_MASK(_mask, "FIELD_GET: "); \
+		__BF_FIELD_CHECK_REG(_mask, _reg, "FIELD_GET: "); \
 		(typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask)); \
 	})
--
Thanks,
~Nick Desaulniers
On 7/8/20 12:10 PM, Nick Desaulniers wrote:
On Tue, Jul 7, 2020 at 3:43 PM Alex Elder elder@linaro.org wrote:
On 7/7/20 4:16 PM, Nick Desaulniers wrote:
From: Jakub Kicinski kuba@kernel.org
When ur_load_imm_any() is inlined into jeq_imm(), it's possible for the compiler to deduce a case where _val can only have the value of -1 at compile time. Specifically,
	/* struct bpf_insn: _s32 imm */
	u64 imm = insn->imm; /* sign extend */
	if (imm >> 32) { /* non-zero only if insn->imm is negative */
		/* inlined from ur_load_imm_any */
		u32 __imm = imm >> 32; /* therefore, always 0xffffffff */
		if (__builtin_constant_p(__imm) && __imm > 255)
			compiletime_assert_XXX()
This can result in tripping a BUILD_BUG_ON() in __BF_FIELD_CHECK() that checks that a given value is representable in one byte (interpreted as unsigned).
Hi Alex, Thanks for taking a look. They're good and fair questions.
Why does FIELD_FIT() pass an unsigned long long value as the second argument to __BF_FIELD_CHECK()?
Was Jakub's suggestion; I don't feel strongly against it either way, though...
Could it pass (typeof(_mask))0 instead?
...might be nice to avoid implicit promotions and conversions if _mask is not the same size as _val.
It wouldn't fix this particular case, because UR_REG_IMM_MAX is also defined with that type. But (without working through this in more detail) it seems like there might be a solution that preserves the compile-time checking.
I'd argue the point of the patch is to not check at compile time for FIELD_FIT, since we have a case in drivers/net/ethernet/netronome/nfp/bpf/jit.c (jeq_imm()) that will always pass -1 (unintentionally due to compiler optimization).
I understand why something needs to be done to handle that case. There's fancy macro gymnastics in "bitfield.h" to add convenient build-time checks for usage problems; I just thought there might be something we could do to preserve the checking--even in this case. But figuring that out takes more time than I was willing to spend on it yesterday...
A second comment about this is that it might be nice to break __BF_FIELD_CHECK() into the parts that verify the mask (which could be used by FIELD_FIT() here) and the parts that verify other things.
Like so? Jakub, WDYT? Or do you prefer v1+Alex's suggestion about using `(typeof(_mask))0` in place of 0ULL?
Yes, very much like that! But you could do that as a follow-on instead, so as not to delay or confuse things.
Thanks.
-Alex
On Wed, Jul 8, 2020 at 10:34 AM Alex Elder elder@linaro.org wrote:
I understand why something needs to be done to handle that case. There's fancy macro gymnastics in "bitfield.h" to add convenient build-time checks for usage problems; I just thought there might be something we could do to preserve the checking--even in this case. But figuring that out takes more time than I was willing to spend on it yesterday...
I also find the use of 0U in FIELD_GET sticks out from the use of 0ULL or (0ull) in these macros (hard to notice, but I changed it in my diff to 0ULL). Are there implicit promotion+conversion bugs here? I don't know, but I'd rather not have to think about it, by just using types of the same width and signedness.
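For what it's worth, a quick userspace check (illustrative only, not the
kernel code) suggests that for this particular expression the narrower zero
can't truncate a 64-bit mask, since in `wide & narrow' the narrow operand is
the one that gets promoted:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t mask = 0xff00000000ULL;	/* field above bit 31 */
		int shf = __builtin_ffsll(mask) - 1;	/* 32 */

		/* both spellings of the "value too large" test evaluate to 0 */
		printf("%llu\n", (unsigned long long)(~(mask >> shf) & 0U));
		printf("%llu\n", (unsigned long long)(~(mask >> shf) & 0ULL));
		return 0;
	}

Still, using one consistent width avoids having to reason about it at all.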
A second comment about this is that it might be nice to break __BF_FIELD_CHECK() into the parts that verify the mask (which could be used by FIELD_FIT() here) and the parts that verify other things.
Like so? Jakub, WDYT? Or do you prefer v1+Alex's suggestion about using `(typeof(_mask))0` in place of 0ULL?
Yes, very much like that! But you could do that as a follow-on instead, so as not to delay or confuse things.
No rush; let's get it right.
So I can think of splitting this into maybe 3 patches, based on feedback:
1. There's a bug in compile-time validating _val in FIELD_FIT, since we want to be able to call it at runtime with "bad" values.
2. The FIELD_* macros use constants (0ull, 0ULL, 0U) that don't match typeof(_mask).
3. It might be nice to break up __BF_FIELD_CHECK.
I don't think anyone's raised an objection to 1.
Assuming Jakub is ok with 3, fixing 3 will actually also address 2. So then we don't need 3 patches; only 2. But if we don't do 3 first, then I have to resend a v2 of 1 anyways to address 2 (which was Alex's original feedback).
My above diff was all three in one go, but I don't think it would be unreasonable to break it up into 3 then 1.
If we prefer not to do 3, then I can send a v2 of 1 that addresses the inconsistent use of types, as one or two patches.
Jakub, what is your preference?
(Also, noting that I'm sending to David, assuming he'll pick up the patches once we have everyone's buy in? Or is there someone else more appropriate to accept changes to this header? I guess Jakub and David are the listed maintainers for drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp_eth.c)
On Wed, 8 Jul 2020 10:56:43 -0700 Nick Desaulniers wrote:
On Wed, Jul 8, 2020 at 10:34 AM Alex Elder elder@linaro.org wrote:
I understand why something needs to be done to handle that case. There's fancy macro gymnastics in "bitfield.h" to add convenient build-time checks for usage problems; I just thought there might be something we could do to preserve the checking--even in this case. But figuring that out takes more time than I was willing to spend on it yesterday...
I also find the use of 0U in FIELD_GET sticks out from the use of 0ULL or (0ull) in these macros (hard to notice, but I changed it in my diff to 0ULL). Are there implicit promotion+conversion bugs here? I don't know, but I'd rather not have to think about it, by just using types of the same width and signedness.
TBH I just copied the type from other arguments. It doesn't matter in practice now in this case. I have no preference.
A second comment about this is that it might be nice to break __BF_FIELD_CHECK() into the parts that verify the mask (which could be used by FIELD_FIT() here) and the parts that verify other things.
Like so? Jakub, WDYT? Or do you prefer v1+Alex's suggestion about using `(typeof(_mask))0` in place of 0ULL?
Yes, very much like that! But you could do that as a follow-on instead, so as not to delay or confuse things.
No rush; let's get it right.
So I can think of splitting this into maybe 3 patches, based on feedback:
1. There's a bug in compile-time validating _val in FIELD_FIT, since we want to be able to call it at runtime with "bad" values.
2. The FIELD_* macros use constants (0ull, 0ULL, 0U) that don't match typeof(_mask).
3. It might be nice to break up __BF_FIELD_CHECK.
I don't think anyone's raised an objection to 1.
Assuming Jakub is ok with 3, fixing 3 will actually also address 2. So then we don't need 3 patches; only 2. But if we don't do 3 first, then I have to resend a v2 of 1 anyways to address 2 (which was Alex's original feedback).
My above diff was all three in one go, but I don't think it would be unreasonable to break it up into 3 then 1.
If we prefer not to do 3, then I can send a v2 of 1 that addresses the inconsistent use of types, as one or two patches.
Jakub, what is your preference?
I don't see much point in breaking up the checking macro. But even less in arguing either way :)
(Also, noting that I'm sending to David, assuming he'll pick up the patches once we have everyone's buy in? Or is there someone else more appropriate to accept changes to this header? I guess Jakub and David are the listed maintainers for drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp_eth.c)
Seems reasonable, put [PATCH net] in the subject to make that explicit.