It used to be that mix_interrupt_randomness() would credit 1 bit each
time it ran, and so add_interrupt_randomness() would schedule mix() to
run every 64 interrupts, a fairly arbitrary number, but nonetheless
considered to be a decent enough conservative estimate.
Since e3e33fc2ea7f ("random: do not use input pool from hard IRQs"),
mix() is now able to credit multiple bits, depending on the number of
calls to add(). This was done for reasons separate from this commit, but
it has the nice side effect of enabling this patch to schedule mix()
less often.
Currently the rules are:
a) Credit 1 bit for every 64 calls to add().
b) Schedule mix() once per second during which add() is called.
c) Schedule mix() once every 64 calls to add().
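In mix_interrupt_randomness(), rule (a) amounts to roughly the
following (a simplified sketch, not a verbatim quote of the kernel
source; `count` stands for the fast pool's call counter):

	/* rule (a): credit one bit of entropy per 64 calls to add(),
	 * with a minimum of one bit per scheduled run */
	credit_init_bits(max(1u, count / 64));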
Rules (a) and (c) no longer need to be coupled. It's still important to
have _some_ threshold in (c), so that we don't "over-saturate" the fast
pool, but the once-per-second schedule from rule (b) is already a
sufficient baseline. So, by increasing the 64 in rule (c) to something
larger, we avoid calling queue_work_on() as frequently during irq
storms.
This commit changes the 64 in rule (c) to 1024, which means we schedule
mix() 16 times less often. The 64 in rule (a) does *not* need to
change.
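For reference, rules (b) and (c) together form the single gating test
visible in the diff below; an annotated sketch of the resulting check
in add_interrupt_randomness():

	/* Rule (c): fewer than 1024 calls to add() since the last run,
	 * and rule (b): less than one second elapsed since then. If
	 * both hold, don't schedule mix_interrupt_randomness() yet. */
	if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
		return;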
Fixes: 58340f8e952b ("random: defer fast pool mixing to worker")
Cc: stable@vger.kernel.org
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
drivers/char/random.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 655e327d425e..d0e4c89c4fcb 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1009,7 +1009,7 @@ void add_interrupt_randomness(int irq)
if (new_count & MIX_INFLIGHT)
return;
- if (new_count < 64 && !time_is_before_jiffies(fast_pool->last + HZ))
+ if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
return;
if (unlikely(!fast_pool->mix.func))
--
2.35.1
test_bit(), like any other bitmap op, takes an `unsigned long *` as its
second argument (a pointer to the actual bitmap), since a bitmap itself
is an array of unsigned longs. However, the ia64_get_irr() code passes
a reference to a `u64` as the second argument.
This works with the ia64 bitops implementation because those take a
`void *` as the second argument and cast it later on. It also works
with the generic bitmap API because `unsigned long` has the same size
as `u64` (`unsigned long long`) on ia64, but from the compiler's PoV
the two types are different.
Define @irr as `unsigned long` to fix that. This implies no functional
change. The mismatch has been hidden for 16 years!
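To illustrate the class of warning this silences, here is a minimal
user-space reduction (hypothetical example, not the kernel code): on an
LP64 target both types are 64 bits wide, yet their pointer types remain
distinct:

	/* build with: cc -Wall -c irr_type.c (file name hypothetical) */
	typedef unsigned long long u64;

	/* stand-in for test_bit()'s prototype: bitmap ops take a
	 * pointer to unsigned long */
	static inline int test_bit_like(unsigned int nr, const unsigned long *addr)
	{
		return (addr[nr / (8 * sizeof(long))] >> (nr % (8 * sizeof(long)))) & 1;
	}

	int broken(void)
	{
		u64 irr = 0x10;
		/* warning: incompatible pointer types passing 'u64 *'
		 * where 'const unsigned long *' is expected */
		return test_bit_like(4, &irr);
	}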
Fixes: a58786917ce2 ("[IA64] avoid broken SAL_CACHE_FLUSH implementations")
Cc: stable@vger.kernel.org # 2.6.16+
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Yury Norov <yury.norov@gmail.com>
---
arch/ia64/include/asm/processor.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/ia64/include/asm/processor.h b/arch/ia64/include/asm/processor.h
index 7cbce290f4e5..757c2f6d8d4b 100644
--- a/arch/ia64/include/asm/processor.h
+++ b/arch/ia64/include/asm/processor.h
@@ -538,7 +538,7 @@ ia64_get_irr(unsigned int vector)
{
unsigned int reg = vector / 64;
unsigned int bit = vector % 64;
- u64 irr;
+ unsigned long irr;
switch (reg) {
case 0: irr = ia64_getreg(_IA64_REG_CR_IRR0); break;
--
2.36.1
commit 4ddc844eb81da59bfb816d8d52089aba4e59e269 upstream.
In current Linux, MTU policing does not take into account that packets
at TC ingress have the L2 header pulled. Thus, the same TC police
action (with the same value of tcfp_mtu) behaves differently for
ingress and egress.
In addition, the full GSO size is compared to tcfp_mtu: as a
consequence, the policer drops GSO packets even when the individual
segments have an L2 + L3 + L4 + payload length below the configured
value of tcfp_mtu.
Improve the accuracy of MTU policing as follows:
- account for mac_len for non-GSO packets at TC ingress.
- compare the MTU threshold against the per-segment size for GSO
  packets.
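As an illustrative example (numbers chosen here, not taken from the
patch): a GSO skb carrying 64 KB of TCP payload with gso_size 1448 has
a qdisc_pkt_len() of roughly 64 KB, so it used to exceed a 1514-byte
tcfp_mtu and be dropped, even though every segment it resolves to is
14 (L2) + 20 (IPv4) + 32 (TCP with timestamps) + 1448 = 1514 bytes and
conforms. Likewise, at ingress a 1514-byte Ethernet frame arrives with
qdisc_pkt_len() == 1500, since the 14-byte MAC header was already
pulled; adding skb->mac_len restores the comparison to 1514.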
[dcaratti: fix conflicts due to lack of the following commits:
- commit 2ffe0395288a ("net/sched: act_police: add support for
packet-per-second policing")
- commit afe231d32eb5 ("selftests: forwarding: Add tc-police tests")
- commit 53b61f29367d ("selftests: forwarding: Add tc-police tests for
packets per second")]
Link: https://lore.kernel.org/netdev/876d597a0ff55f6ba786f73c5a9fd9eb8d597a03.164…
Signed-off-by: Davide Caratti <dcaratti@redhat.com>
Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
net/sched/act_police.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/net/sched/act_police.c b/net/sched/act_police.c
index 8fd23a8b88a5..a7660b602237 100644
--- a/net/sched/act_police.c
+++ b/net/sched/act_police.c
@@ -213,6 +213,20 @@ static int tcf_police_init(struct net *net, struct nlattr *nla,
return err;
}
+static bool tcf_police_mtu_check(struct sk_buff *skb, u32 limit)
+{
+ u32 len;
+
+ if (skb_is_gso(skb))
+ return skb_gso_validate_mac_len(skb, limit);
+
+ len = qdisc_pkt_len(skb);
+ if (skb_at_tc_ingress(skb))
+ len += skb->mac_len;
+
+ return len <= limit;
+}
+
static int tcf_police_act(struct sk_buff *skb, const struct tc_action *a,
struct tcf_result *res)
{
@@ -235,7 +249,7 @@ static int tcf_police_act(struct sk_buff *skb, const struct tc_action *a,
goto inc_overlimits;
}
- if (qdisc_pkt_len(skb) <= p->tcfp_mtu) {
+ if (tcf_police_mtu_check(skb, p->tcfp_mtu)) {
if (!p->rate_present) {
ret = p->tcfp_result;
goto end;
--
2.35.3