On Thu, Jul 31, 2025 at 5:41 PM Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
> well, random-key update when the map is full is also quite different from random-key update when the map is empty.
> Instead of doing an update from user space, do timed ops:
> 1. start with an empty map, update (aka insert) all keys sequentially
> 2. lookup all sequentially
> 3. delete all sequentially
> 4. update (aka insert) all sequentially
> 5. lookup random
> 6. update random
> 7. delete all random
> The elapsed time for 1 and 4 should be exactly the same. All the others might differ, but they can all be compared to each other and reasoned about.
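For reference, the scheme you describe boils down to a standalone user-space timing program along these lines (rough sketch only: the hash map type, key/value sizes, NR_KEYS and all names are placeholders, and error handling is elided):

/* Rough sketch of the seven timed phases driven from user space with libbpf.
 * Map type, key/value sizes and NR_KEYS are placeholders; error handling is
 * elided for brevity.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <bpf/bpf.h>

#define NR_KEYS 100000

static void update_all(int fd, const __u32 *keys)
{
	__u64 val = 1;

	for (int i = 0; i < NR_KEYS; i++)
		bpf_map_update_elem(fd, &keys[i], &val, BPF_ANY);
}

static void lookup_all(int fd, const __u32 *keys)
{
	__u64 val;

	for (int i = 0; i < NR_KEYS; i++)
		bpf_map_lookup_elem(fd, &keys[i], &val);
}

static void delete_all(int fd, const __u32 *keys)
{
	for (int i = 0; i < NR_KEYS; i++)
		bpf_map_delete_elem(fd, &keys[i]);
}

static void shuffle(__u32 *keys)
{
	for (int i = NR_KEYS - 1; i > 0; i--) {
		int j = rand() % (i + 1);
		__u32 tmp = keys[i];

		keys[i] = keys[j];
		keys[j] = tmp;
	}
}

static void timed(const char *name, void (*op)(int, const __u32 *),
		  int fd, const __u32 *keys)
{
	struct timespec start, end;

	clock_gettime(CLOCK_MONOTONIC, &start);
	op(fd, keys);
	clock_gettime(CLOCK_MONOTONIC, &end);
	printf("%-30s %8.2f ms\n", name,
	       (end.tv_sec - start.tv_sec) * 1e3 +
	       (end.tv_nsec - start.tv_nsec) / 1e6);
}

int main(void)
{
	int fd = bpf_map_create(BPF_MAP_TYPE_HASH, "bench_map",
				sizeof(__u32), sizeof(__u64), NR_KEYS, NULL);
	__u32 *keys = calloc(NR_KEYS, sizeof(*keys));

	for (__u32 i = 0; i < NR_KEYS; i++)
		keys[i] = i;

	timed("1. update all (sequential)", update_all, fd, keys); /* empty map */
	timed("2. lookup all (sequential)", lookup_all, fd, keys);
	timed("3. delete all (sequential)", delete_all, fd, keys);
	timed("4. update all (sequential)", update_all, fd, keys); /* should match 1 */
	shuffle(keys);
	timed("5. lookup random", lookup_all, fd, keys);
	timed("6. update random", update_all, fd, keys);
	timed("7. delete all random", delete_all, fd, keys);

	free(keys);
	return 0;
}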
Having both sequential and random access in the benchmarks is fine, but as far as I can tell the scheme you propose doesn't match how the bpf bench framework is implemented.
Plus, handing off a map between subtests is brittle and error-prone. What if I just want to investigate the sequential-access update time? The cost of the most expensive op (probably delete) is going to dwarf all the other timings, making them difficult to separate. The scheme is also susceptible to noise if I can't crank up the number of iterations without altering the number of entries in the map. Microbenchmarks mitigate noise and run-to-run variance by doing a single op over and over again.
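Concretely, the shape I mean is: fixed setup once, then a single operation repeated for as many iterations as needed to average out the noise, independent of the map size. A standalone illustration (not the actual selftests bench code; names and sizes are placeholders):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <bpf/bpf.h>

#define MAX_ENTRIES	100000
#define ITERATIONS	10000000L

int main(void)
{
	int fd = bpf_map_create(BPF_MAP_TYPE_HASH, "bench_map",
				sizeof(__u32), sizeof(__u64), MAX_ENTRIES, NULL);
	struct timespec start, end;
	__u64 val = 1;
	__u32 key;

	/* Fixed setup: pre-fill the map so every timed update hits an
	 * existing entry.  The fill level is part of this benchmark's
	 * definition, not state handed off from a previous subtest.
	 */
	for (key = 0; key < MAX_ENTRIES; key++)
		bpf_map_update_elem(fd, &key, &val, BPF_ANY);

	/* Timed region: a single operation repeated ITERATIONS times.
	 * Iterations can be cranked up without touching the map size.
	 */
	clock_gettime(CLOCK_MONOTONIC, &start);
	for (long i = 0; i < ITERATIONS; i++) {
		key = rand() % MAX_ENTRIES;
		bpf_map_update_elem(fd, &key, &val, BPF_ANY);
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	printf("random update on a full map: %.1f ns/op\n",
	       ((end.tv_sec - start.tv_sec) * 1e9 +
		(end.tv_nsec - start.tv_nsec)) / ITERATIONS);
	return 0;
}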
I agree we need a better approach to timing deletes than what I suggested, but I don't think this is it.