Hello,
I encountered an issue while importing an SSH device using lava-tool.
First, my lava-server is running inside a Docker container combined with
volumes for the following directory trees:
[ … extracted from docker-compose file ]
- /boot:/boot
- /lib/modules:/lib/modules
- /dev/bus/usb:/dev/bus/usb
- /root/.ssh:/root/.ssh:ro
- lava:/var/lib/lava:rw
- lava-server:/var/lib/lava-server:rw
- lava-server-etc:/etc/lava-server:rw
- postgresql:/var/lib/postgresql:rw
- logs:/var/log:rw
- ssl:/etc/ssl:rw
lava, lava-server, lava-server-etc, postgresql, logs and ssl are named
volumes in Docker and, as you can see, mounted with read/write permissions.
I can do some configuration with Django and everything seems to work well
when I access a job (I can see the YAML definition, logs, …).
The problem occurs when I want to import a device dictionary using
lava-tool.
My dictionary is the following, and it works fine on a classical server
without any dockerization:
{% extends 'ssh.jinja2' %}
{% set ssh_id = '/root/.ssh/id_rsa' %}
{% set ssh_host = '10.0.0.2' %}
The command I use with lava-tool is the following:
lava-tool device-dictionary --update
/etc/lava-server/dispatcher-config/devices/pcmquad-ssh.jinja2
http://admin@172.18.0.2/RPC2 pcmquad-ssh
and I get the following output:
Updating device dictionary for pcmquad-ssh on http://admin@172.18.0.2/RPC2
<Fault 400: 'Unable to store the configuration for pcmquad-ssh on disk'>
I don't really understand why I cannot import the dictionary, and I would
like to know where the dictionary is physically stored so I can check
whether there is any issue with my Docker volumes.
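A quick check from the host (a sketch only; the container name
"lava-server" and the server user "lavaserver" are assumptions on my side,
based on the error message suggesting a failed file write under
/etc/lava-server/dispatcher-config/devices/):

docker exec lava-server ls -ld /etc/lava-server/dispatcher-config/devices
docker exec lava-server su -s /bin/sh lavaserver -c \
  "touch /etc/lava-server/dispatcher-config/devices/.write-test"

If the touch fails, the ownership or mode of the lava-server-etc volume
would be the likely culprit.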
Thanks in advance for any reply.
Jonathan
Hi,
I'm using a job definition template to submit LAVA jobs; basically, I
use something like
path: automated/linux/{test_suite}/{test_suite}.yaml
but I find that lmbench doesn't follow this layout.
Can I send a patch to rename "automated/linux/lmbench/lmbench-memory.yaml"
to "automated/linux/lmbench-memory/lmbench-memory.yaml",
just like ltp-realtime does?
lyang001@pek-lyang0-d1:/tmp/tt/test-definitions$ ls
automated/linux/ltp-realtime/ltp-realtime.yaml
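For context, the templated test block I use looks roughly like this (a
sketch; the repository URL and the exact surrounding fields are
illustrative, not my literal job):

# {test_suite} is substituted before submission, e.g. with "lmbench-memory".
- test:
    definitions:
    - repository: https://git.linaro.org/qa/test-definitions.git
      from: git
      path: automated/linux/{test_suite}/{test_suite}.yaml
      name: {test_suite}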
Lei
Hi,
I ran into the following stack trace [0].
IMHO deployment_data should always be set, right?
Best,
lynxis
[0] https://lava.fe80.eu/scheduler/job/63
Traceback (most recent call last):
File "/usr/bin/lava", line 11, in <module>
load_entry_point('lava-tool==0.21', 'console_scripts', 'lava')()
File "/usr/lib/python2.7/dist-packages/lava/tool/dispatcher.py", line 150, in run
raise SystemExit(cls().dispatch(args))
File "/usr/lib/python2.7/dist-packages/lava/tool/dispatcher.py", line 140, in dispatch
return command.invoke()
File "/usr/lib/python2.7/dist-packages/lava/dispatcher/commands.py", line 226, in invoke
job_runner, job_data = self.parse_job_file(self.args.job_file, oob_file)
File "/usr/lib/python2.7/dist-packages/lava/dispatcher/commands.py", line 286, in parse_job_file
env_dut=env_dut)
File "/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/parser.py", line 172, in parse
test_info, test_counts[namespace])
File "/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/parser.py", line 68, in parse_action
Deployment.select(device, parameters)(pipeline, parameters)
File "/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/actions/deploy/ssh.py", line 54, in __init__
parent.add_action(self.action, parameters)
File "/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/action.py", line 172, in add_action
action.populate(parameters)
File "/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/actions/deploy/ssh.py", line 95, in populate
tar_flags = parameters['deployment_data']['tar_flags'] if 'tar_flags' in parameters['deployment_data'].keys() else ''
KeyError: 'deployment_data'
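For what it's worth, a minimal sketch of a defensive rewrite of the line
that raises (ssh.py line 95); whether the right fix is this guard, or
making the parser always inject deployment_data, is a question for the
LAVA developers:

# Sketch: .get() degrades a missing 'deployment_data' key to empty
# tar flags instead of raising KeyError.
tar_flags = parameters.get('deployment_data', {}).get('tar_flags', '')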
--
Alexander Couzens
mail: lynxis(a)fe80.eu
jabber: lynxis(a)fe80.eu
mobile: +4915123277221
gpg: 390D CF78 8BF9 AA50 4F8F F1E2 C29E 9DA6 A0DF 8604
Hi,
As you might have noticed, there are a couple of 'new' directories in
the test-definitions.git [1] repository. This is an attempt to refresh
the approach to test execution. There are 2 main reasons behind it:
- decouple from LAVA helper scripts (as much as possible)
- allow local execution of the scripts outside LAVA
All 'new' tests are now placed in the 'automated/' and 'manual/'
paths. The old layout should now be considered obsolete. This means
that the following directories are no longer updated and will be
deleted:
- android
- common
- fedora
- openembedded
- ubuntu
Please check if the test you're using is already included in the
'automated/' directory. There are 2 subdirs - android and linux. These
are the ones containing tests (each test in a separate directory). If
the test you're currently using isn't there, please reply to this
thread with the details. The plan is to migrate all tests that are in
use. Some of the tests in the above-mentioned directories have been
abandoned for a long time. Such tests won't be migrated.
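A quick way to check locally (a sketch; it simply clones the repository
cited in [1] and lists the linux tests):

git clone https://git.linaro.org/qa/test-definitions.git
ls test-definitions/automated/linux/   # each test lives in its own directory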
I'm planning to delete the deprecated directories from the repository
by the end of June 2017.
[1] https://git.linaro.org/qa/test-definitions.git
Best Regards,
milosz
Hello,
The LITE team appreciates the bootstrapping of Zephyr-related LAVA testing
done by the LAVA, LAVA Lab, B&B and QA teams. Getting more involved with
LAVA testing had been quite a backlogged task for us, and
hopefully, the time has come ;-).
I've reviewed the current status of on-device testing for Zephyr CI
jobs and see the following picture (feel free to correct me if
something is wrong or missing): "zephyr-upstream" and
"zephyr-upstream-arm" (https://ci.linaro.org/view/lite-iot-ci/) CI jobs
submit a number of tests to LAVA (via https://qa-reports.linaro.org/)
for the following boards: arduino_101, frdm_k64f, frdm_kw41z,
qemu_cortex_m3. Here's an example of cumulative test report for these
platforms: https://qa-reports.linaro.org/lite/zephyr-upstream/tests/
That's really great! (Though the list of tests to run in LAVA seems to
be hardcoded:
https://git.linaro.org/ci/job/configs.git/tree/zephyr-upstream/submit_for_t…)
But we'd like to test things beyond the Zephyr testsuite, for example,
application frameworks (JerryScript, Zephyr.js, MicroPython) and
the mcuboot bootloader. For starters, we'd like to perform just a boot
test to make sure that each application can boot and start up, then
later hopefully extend that to functional testing.
The most basic testing would be to just check that after boot there's an
expected prompt from each of the apps, i.e. to test it in a "passive"
manner, similar to the Zephyr unittests discussed above. I tried this with
Zephyr.js and was able to make it work (with manual submission so far):
https://validation.linaro.org/scheduler/job/1534097 . A peculiarity in
this case is that the default test app of Zephyr.js outputs just a
single line "Hello, ZJS world!", whereas LAVA's test/monitors test
job config specifies a testsuite begin pattern, an end pattern, and
testcase patterns, and I suspected that each of them needs to be on a
separate line. But I was able to make it pass with the following config:
- test:
    monitors:
    - name: foo
      start: ""
      end: Hello, ZJS world!
      pattern: (?P<result>(PASS|FAIL))\s-\s(?P<test_case_id>\w+)\.
So, the "start" substring is empty, and perhaps matches a line output by
a USB multiplexer or board bootloader. "End" substring is actually the
expected single-line output. And "pattern" is unused (dunno if it can
be dropped without def file syntax error). Is there a better way to
handle single-line test output?
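For illustration, here is what that pattern would capture if the app did
print per-testcase result lines (the sample line below is an assumption in
the Zephyr testsuite style, not actual Zephyr.js output):

import re

# Assumed sample line: "PASS - test_foo."
m = re.match(r'(?P<result>(PASS|FAIL))\s-\s(?P<test_case_id>\w+)\.',
             'PASS - test_foo.')
print(m.groupdict())  # {'result': 'PASS', 'test_case_id': 'test_foo'}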
Well, beyond simple output matching, it would be nice even for the
initial "smoke testing" to actually send some input to the application
and check the expected output (e.g., input: "2+2", expected output:
"4"). Is this already supported for LAVA "v2" pipeline tests? I'd
imagine the same kind of functionality would be required to test
bootloaders like U-Boot for Linux boards.
Thanks,
Paul
Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro - http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
Hi Team,
The splice test case is newly running from kselftest (after updating the
Makefile). splice stalls on LAVA but passes on a local HiKey running
linux-next and linux-rc-4.9.
I also tested with set -e from run_kselftest.sh, and it still passes.
The question is: why does it stall on LAVA? I re-submitted the job a
couple of times and was able to reproduce the issue.
The problem comes from this script:
#!/bin/sh
n=`./default_file_splice_read </dev/null | wc -c`
test "$n" = 0 && exit 0
echo "default_file_splice_read broken: leaked $n"
exit 1
It seems LAVA is not happy with "</dev/null" in any script, or with "exit
0", when run via LXC.
Please investigate this problem.
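To narrow it down, a hedged minimal reproducer that keeps only the suspect
redirection and early exit, with no splice binary involved (the script
name and the LXC context are assumptions):

#!/bin/sh
# Reads stdin redirected from /dev/null; should count 0 bytes and exit 0.
n=`cat </dev/null | wc -c`
test "$n" = 0 && exit 0
echo "redirection broken: got $n bytes"
exit 1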
Running tests in splice
========================================
selftests: default_file_splice_read [PASS]
selftests: default_file_splice_read.sh [PASS]
Test case source:
https://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest.git/t…
LAVA job id:
https://lkft.validation.linaro.org/scheduler/job/8875#L2574
https://lkft.validation.linaro.org/scheduler/job/8806#L3077
Best regards
Naresh Kamboju