Depending on timing in a test is quite brittle in general: can we mock the
timeout instead and make this fully deterministic somehow?
FIXME: This test is fragile because it relies on time which can
be affected by system performance. In particular we are currently
assuming that `short.py` can be successfully executed within 2
seconds of wallclock time.
Maybe "short.py" can be replaced by adding into lit itself a "no op" which
would just not really spawn a process and instead mark the task as
completed immediately internally?
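
As a rough illustration (a hypothetical sketch only - the names below
are made up, not lit's actual internals), the runner could recognize
such a builtin and synthesize a completed result without ever creating
a process:

    # Hypothetical sketch of a "no-op" builtin for lit's internal shell.
    # ShellCommandResult and executeCommand are illustrative names, not
    # lit's real API.
    import collections
    import subprocess

    ShellCommandResult = collections.namedtuple(
        "ShellCommandResult", ["command", "stdout", "stderr", "exitCode"])

    def executeCommand(command):
        # Special-case the hypothetical builtin: no process is spawned,
        # so no wallclock timing is involved at all.
        if command[0] == "noop":
            return ShellCommandResult(command, "", "", 0)
        # Otherwise fall through to normal subprocess-based execution.
        proc = subprocess.run(command, capture_output=True, text=True)
        return ShellCommandResult(command, proc.stdout, proc.stderr,
                                  proc.returncode)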
--
Mehdi
On Wed, Sep 16, 2020 at 10:24 PM David Blaikie via llvm-dev
<llvm-dev@lists.llvm.org> wrote:
> I appreciate the value of the feature - but it's possible the test
> doesn't pull its weight. Is the code that implements the feature
> liable to failure, or often touched? If it's pretty static and failure
> is unlikely, the time spent and the flaky failures may not be worth
> the value of catching a low-chance bug.
>
> Another option might be to reduce how often, or in which
> configurations, the test is run. LLVM_ENABLE_EXPENSIVE_CHECKS
> presumably only applies to code within LLVM itself and not to test
> cases - but maybe I'm wrong there, and this parameter could be used
> (with the timing then bumped up quite a bit to make the test much more
> reliable), or something similar could be implemented at the lit check
> level?
>
> Ah, compiler-rt tests use EXPENSIVE_CHECKS to disable certain tests:
>
> ./compiler-rt/test/lit.common.configured.in:
>   set_default("expensive_checks", @LLVM_ENABLE_EXPENSIVE_CHECKS_PYBOOL@)
> ./compiler-rt/test/fuzzer/large.test:UNSUPPORTED: expensive_checks
>
> Could you bump the timeouts a fair bit and disable the tests except
> under expensive checks?
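>
> Concretely, the lit side might look something like this (a sketch
> only; the feature name and the config attribute are assumed here, not
> taken from lit's actual test-suite config):
>
>     # lit.cfg.py (sketch): publish a feature when expensive checks
>     # are enabled in the build.
>     if config.enable_expensive_checks:
>         config.available_features.add("expensive_checks")
>
>     # In the timeout test file itself, run only under expensive checks:
>     # REQUIRES: expensive_checks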
>
> On Wed, Sep 16, 2020 at 9:31 PM Dan Liew <dan@su-root.co.uk> wrote:
> >
> > Hi David,
> >
> > Unfortunately writing a reliable test is tricky given that the
> > functionality we're trying to test involves timing. I would advise
> > against disabling the test entirely because it actually tests
> > functionality that people use. I'd suggest bumping up the time limits.
> > This is what I've done in the past. See
> >
> > commit 6dfcc78364fa3e8104d6e6634733863eb0bf4be8
> > Author: Dan Liew <dan@su-root.co.uk>
> > Date: Tue May 22 15:06:29 2018 +0000
> >
> > [lit] Try to make `shtest-timeout.py` test more reliable by using a
> > larger timeout value. This really isn't very good because it will
> > still be susceptible to machine performance.
> >
> > While we are here also fix a bug in validation of
> > `maxIndividualTestTime` where previously it wasn't checked if the
> > type was an int.
> >
> > rdar://problem/40221572
> >
> > llvm-svn: 332987
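> >
> > For reference, the type-validation part of that fix amounts to
> > something like the following (a sketch, not the exact lit code):
> >
> >     class LitConfigSketch(object):
> >         """Minimal stand-in for lit's config object (illustration only)."""
> >
> >         def __init__(self):
> >             self._max_time = 0
> >
> >         @property
> >         def maxIndividualTestTime(self):
> >             return self._max_time
> >
> >         @maxIndividualTestTime.setter
> >         def maxIndividualTestTime(self, value):
> >             # The fix: reject non-int values up front instead of
> >             # letting them cause confusing failures later.
> >             if not isinstance(value, int):
> >                 raise ValueError("maxIndividualTestTime must be an int")
> >             if value < 0:
> >                 raise ValueError("maxIndividualTestTime must be >= 0")
> >             self._max_time = value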
> >
> > HTH,
> > Dan.
> >
> > On Wed, 16 Sep 2020 at 09:37, David Blaikie <dblaikie@gmail.com> wrote:
> > >
> > > Ping on this
> > >
> > > On Wed, Sep 9, 2020 at 8:27 PM David Blaikie <dblaikie@gmail.com> wrote:
> > > >
> > > > The clang-cmake-armv8-lld (linaro-toolchain owners) buildbot is
> > > > timing out trying to run some timeout tests (authored by Dan Liew):
> > > >
> > > > Pass: http://lab.llvm.org:8011/builders/clang-cmake-armv8-lld/builds/5672
> > > > Fail: http://lab.llvm.org:8011/builders/clang-cmake-armv8-lld/builds/5673
> > > >
> > > > Is there anything we can do to the buildbot? Or the tests? (bump
> > > > up the time limits or maybe remove the tests as unreliable?)