Hi,
As we're approaching BKK19 I'd like to compose a list of topics to cover there:
- test coverage wrt test plan or requirements
There are at least two tickets about this:
https://github.com/Linaro/squad/issues/472
https://github.com/Linaro/squad/issues/205
- l10n and i18n
I reported a ticket. This seems to be a fairly easy thing to do but
requires quite some effort to prepare all translations:
https://github.com/Linaro/squad/issues/491
Apart from that I don't have any other priorities. If you think that
some other feature or bug is important please reply to this thread
with your proposal.
milosz
Hi all,
I wanted to know if you plan to integrate filters into project comparison?
For example, I run CTS, so I have a lot of test cases, and I'd like to see
only the differences between 2 builds of 2 different projects.
Let me know what you think :)
Axel
On Mon, 28 Jan 2019 at 16:21, Dan Rue <dan.rue(a)linaro.org> wrote:
>
> On Mon, Jan 28, 2019 at 09:24:26AM +0000, Milosz Wasilewski wrote:
> > On Fri, 25 Jan 2019 at 23:00, Dan Rue <dan.rue(a)linaro.org> wrote:
> > >
> > > More updates! Described below, referenced links go to source:
> > > - beaglebone-black is now working for me [1]
> > > - ser2net containerized [2]
> > > - LAVA upgrade process is documented [3]
> > > - Squid container added; nginx images hack removed [4]
> > >
> > > The beaglebone-black branch represents what's now an actual working
> > > docker-compose environment for my bbb, using a recent u-boot (this
> > > turned out to be the hardest part - totally unrelated to docker). I
> > > ended up running NFS and TFTP on the host and mounting the paths into
> > > the dispatcher. I'd like to containerize those still, but NFS is a bit
> > > difficult in particular and I just wanted to see things work.
> > >
> > > The beaglebone-black branch is back to using the dispatcher without
> > > rebuilding it. I did this by breaking ser2net into its own container
> > > that can be found at danrue/ser2net and used as follows:
> > >
> > > version: '3.4'
> > > services:
> > >   ser2net:
> > >     image: danrue/ser2net:3.5
> > >     volumes:
> > >       - ./ser2net/ser2net.conf:/etc/ser2net.conf
> > >     devices:
> > >       - /dev/serial/by-id/usb-Silicon_Labs_CP2102_USB_to_UART_Bridge_Controller_0001-if00-port0
> > >
> > > The best part is running something like this to spy on the serial port during testing:
> > >
> > > docker-compose exec dispatcher telnet ser2net 5001
> > >
> > > The LAVA upgrade has been documented in the README, but it's simple
> > > enough I'll reproduce it here:
> > >
> > > 1. Stop containers.
> > > 2. Back up pgsql from its docker volume
> > >
> > > sudo tar cvzf lava-server-pgdata-$(date +%Y%m%d).tgz /var/lib/docker/volumes/lava-server-pgdata
> > >
> > > 3. Change e.g. `lavasoftware/amd64-lava-server:2018.11` to
> > > `lavasoftware/amd64-lava-server:2019.01` and
> >
> > Is the content of /var/lib/lava-server/default/media/job-output/ also
> > preserved in this scenario? If not, this dir should also probably be
> > mapped into a volume so it is moved between migrated versions.
>
> Oh, good catch. Fixed with a docker volume @
> https://github.com/danrue/lava-docker-compose/blob/master/docker-compose.ym…
>
Today I managed to get LAVA and SQUAD working together in
containerized setup. Here is the repository with docker-compose:
https://github.com/mwasilew/lava-docker-compose
I haven't updated the README yet. It's still not ideal, as there may be
race conditions when starting SQUAD (only the first time, when the DB is
not yet populated). One big issue is the lack of command-line tools for
user management in SQUAD: an admin user can be added, but its password
can't be set. I'm planning to copy this code from LAVA to provide the
same options. So far I managed to submit one QEMU job to LAVA via the
SQUAD proxy and retrieve the results once the job finished.
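Until such tools exist, one possible workaround (a rough sketch, assuming
the SQUAD container exposes Django's shell via manage.py and SQUAD uses
Django's standard auth user model) is to set the password from the Django
shell inside the container:

    # run inside the SQUAD container's Django shell ("manage.py shell");
    # assumes the standard Django auth user model
    from django.contrib.auth import get_user_model

    User = get_user_model()
    admin, _ = User.objects.get_or_create(
        username="admin",
        defaults={"is_superuser": True, "is_staff": True},
    )
    admin.set_password("change-me")  # replace with a real password
    admin.save()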
Comments are welcome :)
milosz
Hi all,
After looking at squad-worker's log, I can see it reports "WorkerLostError".
On the web UI, the last build on the project summary page is not updated
with the latest test results, but these results are visible if we click on
this build.
Moreover, the build is still in the "unfinished" state even though no tests
are running, and there is no indication of why the build is considered
unfinished.
Is it possible the build is not updated because of the errors the worker is
facing?
You can see the worker's log in the attachment.
Best regards,
Axel
Hi,
Yesterday during the SQUAD sync we discussed strategies for marking
baselines explicitly, instead of having the previous build automatically
used as the baseline. This has been suggested before here, and here.
By default, every finished build becomes the baseline of the next
finished build.
Two alternatives to tell SQUAD not to mark a finished build as
baseline are as follows (to be implemented):
a) go to build's page > build settings > check 'ignore as baseline'
option; anyone with write permissions to that build's project is
allowed to change its settings;
b) call qa-reports.l.o/api/createbuild passing 'is_baseline=false' as
POST data before calling qa-reports.l.o/api/submitjob
In the absence of 'is_baseline', the build will be marked as a baseline
by default, for backwards compatibility.
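To make option b concrete, here is a sketch of what the calls could look
like; neither endpoint accepts 'is_baseline' yet, and the payload field
names and token authentication below are illustrative assumptions:

    # sketch of the proposed flow, not an implemented API
    import requests

    SQUAD = "https://qa-reports.linaro.org"
    headers = {"Auth-Token": "<your token>"}

    # 1. create the build and tell SQUAD not to use it as a baseline
    requests.post(
        SQUAD + "/api/createbuild",
        headers=headers,
        data={
            "team": "example-team",        # hypothetical identifiers
            "project": "example-project",
            "version": "build105",
            "is_baseline": "false",
        },
    )

    # 2. then submit jobs/results for that build as usual
    requests.post(
        SQUAD + "/api/submitjob",
        headers=headers,
        data={
            "team": "example-team",
            "project": "example-project",
            "version": "build105",
        },
    )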
Before discussing edge cases, take the example below (assuming option b
above is used to submit builds to Squad):
+------------+-----------------+-------------+
| build | is_baseline | baseline |
+------------+-----------------+-------------+
| build101 | false | build100 |
| build102 | false | build100 |
| build103 | ----- | build100 |
| build104 | ----- | build103 |
| build105 | false | build104 |
| build106 | false | build104 |
+------------+-----------------+-------------+
1. If build104 is later manually marked as not being a baseline
through the UI, then build103 automatically becomes the current baseline.
2. In the event above, Squad will *NOT* re-run test comparisons for
build105 and build106 against build103.
I think we discussed other edge cases, but I can't remember them now.
Charles
Hi everyone,
At the moment, I'm running 2 servers.
One is for staging and one for production.
On each server, I have one Docker container running LAVA and one running
SQUAD.
From SQUAD-PROD to LAVA-STAGING, I can submit jobs, fetch results etc.;
everything is working fine.
Same from SQUAD-STAGING to LAVA-PROD.
But from SQUAD-PROD to LAVA-PROD, or from SQUAD-STAGING to LAVA-STAGING,
the test submission fails.
Squad-worker returns an error saying the connection timed out.
If I curl LAVA-PROD from SQUAD-PROD, it works. Same if I curl LAVA-STAGING
from SQUAD-STAGING. So it looks like they can communicate after all.
Any idea what could be the problem?
Best regards,
Axel
Hi everyone,
I'm using Bamboo to build new binaries. My goal is to set up my builder to
trigger Squad to submit tests to LAVA using the latest binaries.
This part is done, but I still have a problem. If there are regressions
with the new binaries, Bamboo will still say the last build is good,
because it only performs a curl request and that's it.
My question is: is there a way to get feedback from Squad concerning test
results? For example, the number of passes/fails. This would be enough for
me; I'd just have to write something to compare 2 builds.
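In case it helps, here is a minimal sketch of pulling pass/fail counts from
SQUAD's REST API; the /api/builds/<id>/status endpoint, the field names and
the token authentication are assumptions to verify against the browsable
API on your instance:

    # sketch: fetch pass/fail counts for two builds and compare them
    import requests

    SQUAD = "http://your-squad-server/api"
    headers = {"Authorization": "Token <your token>"}

    def build_status(build_id):
        resp = requests.get(f"{SQUAD}/builds/{build_id}/status/", headers=headers)
        resp.raise_for_status()
        return resp.json()

    old, new = build_status(100), build_status(101)  # example build ids
    for key in ("tests_pass", "tests_fail", "tests_xfail"):
        print(key, old.get(key), "->", new.get(key))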
Regards,
Axel
Hi,
I'm preparing some promotional materials for squad before conference
season starts. If anyone has an idea for a logo, please don't hesitate
to share it.
milosz
Dear Linaro developers,
Thanks to your good answers, things are going well.
I have a question.
When SQUAD fetches results from LAVA, the interval is too long.
This is true even if I reduce the poll interval in the backend settings
in SQUAD.
It fetches at least 1 hour later.
I would like a fetch interval of 30 minutes.
Is there a solution?
thanks
suker
Dear Linaro developers,
I am a software engineer working for Nexell Corporation in South Korea.
Recently, I found SQUAD and am interested in it.
I downloaded the source from GitHub and tried to run it with Docker on my
local machine.
Unfortunately, SQUAD administration is complicated and the setup method
is unclear to me.
For example, to connect Jenkins with SQUAD, *I do not know how to
configure it in SQUAD.*
I have already seen the page below.
(https://squad.readthedocs.io/en/latest/api.html#projects-api-projects)
Jenkins and LAVA are already in operation in my company.
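For reference, pushing results from a CI job (Jenkins, Bamboo, ...) into
SQUAD usually goes through the result-submission API. A minimal sketch,
assuming the documented /api/submit/<group>/<project>/<build>/<environment>
endpoint and token authentication (the names below are placeholders):

    # sketch: submit a set of test results and metrics to SQUAD
    import json
    import requests

    SQUAD = "http://your-squad-server"
    TOKEN = "<squad auth token>"
    group, project, build, environment = "my-group", "my-project", "42", "x86_64"

    tests = {"boot/uboot": "pass", "suite1/test1": "fail"}  # example results
    metrics = {"benchmark/score": 123.4}                    # example metrics

    resp = requests.post(
        f"{SQUAD}/api/submit/{group}/{project}/{build}/{environment}",
        headers={"Auth-Token": TOKEN},
        data={"tests": json.dumps(tests), "metrics": json.dumps(metrics)},
    )
    resp.raise_for_status()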
Thanks
ChoongHyun Jeon.
Dear SQUAD developers,
I used the SQUAD Docker setup from https://github.com/Linaro/squad.
The SQUAD system is running on my localhost, and so is LAVA.
I want to check test submission to SQUAD.
I am using the Python script submit_for_testing.py from
https://git.linaro.org/ci/job/configs.git
The command is below:
python submit_for_testing.py --device-type abc-abc-abc --build-number no1
--lava-server https://192.168.1.20:9099 --qa-server http://192.168.1.70:8000
--qa-server-project remote-lava-prj --qa-server-team remote-lava-team
--test-plan lava-test-plan.yaml
response : "QA Reports submission failed"
What is that mean ?
Squad system setup missing or wrong command?
If that command are success respone, something show on squad explore?
Thanks
kchhero
--
*Without action, there is no change*
Hi everyone,
I'm using LAVA at NXP and after talking with Neil and Milosz, I'd like to
use SQUAD.
I followed the instructions here
https://squad.readthedocs.io/en/latest/install.html
But when I try to run the command "squad", bash tells me the command is not
found.
I tried to run it through the Python3 shell, but that didn't work either.
Maybe I'm missing something obvious?
Best regards,
Axel
Hi,
I just rolled out SQUAD 0.58 to production qa-reports.linaro.org. This
is more of a bugfix release so the changelog is pretty short:
* api:
- fix the /builds//report API
There were a couple of issues with the API. In detail:
- baseline parameter is now respected
- 'force' is now respected
- API doesn't crash when baseline is not None
There is still a problem with displaying the URL for the baseline.
It's unfortunately incorrect and it should be considered a known
issue. It doesn't affect the output stored in output_* fields. The fix
is almost ready and will be submitted for review shortly.
milosz
Hi,
We had an issue where tasks from one backend blocked tasks from other
backends. Maybe we can create as many queues as we have backends and
route the tasks to them? I'm not sure this is easy or even possible,
but just sending this idea out for comments.
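A rough sketch of the idea with Celery task routing; the queue names, the
task-name check and the meaning of args[0] are illustrative assumptions,
not SQUAD's actual code:

    from celery import Celery
    from kombu import Queue

    app = Celery("squad")

    # one queue per backend, plus the default one
    app.conf.task_queues = (
        Queue("celery"),
        Queue("backend-lava-prod"),
        Queue("backend-lava-staging"),
    )

    def route_by_backend(name, args, kwargs, options, task=None, **kw):
        # send backend poll/fetch tasks to the queue of the backend they target
        if "fetch" in name and args:
            return {"queue": "backend-%s" % args[0]}  # args[0]: backend name (assumed)
        return {"queue": "celery"}

    app.conf.task_routes = (route_by_backend,)

Each backend would then get its own worker started with -Q backend-<name>,
so a slow backend could only block its own queue.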
milosz
Hello,
For Google Data Studio integration with SQUAD, I need to retrieve the
list of metrics for each project, so I can let the user select which
project and metric they would like to view in the studio. Something like
this here:
https://github.com/Linaro/squad/blob/master/squad/frontend/views.py#L447
only for all projects available to this user.
Any suggestions on the correct approach here? I was thinking of just
adding a new API function in squad/api/views.py, but I'd like input on
this approach; other ideas are welcome.
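One possible shape for such an endpoint, as a sketch only; the model and
field names below (Metric, test_run__build__project, name) are assumptions
based on the frontend view linked above and would need checking against
squad.core.models:

    from django.http import JsonResponse
    from squad.core.models import Metric, Project

    def project_metrics(request, project_id):
        # return the distinct metric names recorded for one project
        project = Project.objects.get(pk=project_id)
        names = (
            Metric.objects
            .filter(test_run__build__project=project)  # assumed relation path
            .values_list("name", flat=True)
            .distinct()
        )
        return JsonResponse({"project": project.slug, "metrics": sorted(names)})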
Thanks,
--
Stevan Radaković | LAVA Engineer
Linaro.org <www.linaro.org> │ Open source software for ARM SoCs
Hi,
Getting the output of the 'email' API for a build usually takes a long
time. In some cases we're close to hitting the 30-second timeout. I
think the timeout is inevitable when there are a lot of results with
a big number of changes. In order to avoid the timeout, maybe the API
should do the work in the background? This would work as follows:
1. GET call to /api/builds/<id>/email
2. server creates a 'cached report' object in the database and returns
its URL immediately to the user
3. in the background the server adds a report generation task to the queue
4. using the URL received in (2) the user is able to retrieve the final
results or check the progress
5. once the result is generated it can be a short-lived object in the
database (removed after 1 day for example)
Is this the solution we should aim for? The downside is that it
requires active polling from the client side.
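A sketch of what the client side of step 4 could look like; the response
fields ('status', 'output_text') and the exact URLs are assumptions about
the proposed, not yet implemented, behaviour:

    import time
    import requests

    SQUAD = "https://qa-reports.linaro.org"

    # steps 1-2: request the report; the server returns the cached report URL
    resp = requests.get(f"{SQUAD}/api/builds/1234/email")  # 1234: example build id
    report_url = resp.json()["url"]

    # step 4: poll until the background task has produced the output
    while True:
        report = requests.get(report_url).json()
        if report.get("status") == "ready":
            print(report["output_text"])  # assumed field name
            break
        time.sleep(10)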
milosz
Hi,
Antonio proposed a patch that sets the default data retention policy to
180 days. It can be changed for each project in qa-reports separately
(extended or shortened). We don't have an option to keep the data
forever. IMHO this is a good idea, but I wanted to ask whether you
have a need to keep your results forever.
The PR in question is here:
https://github.com/Linaro/squad/pull/370
milosz
Hi,
I released a new version of the SQUAD plugins yesterday but I'm not rolling
them out yet. I think it makes sense to make a new SQUAD release on
Monday and roll out both at the same time. Any objections?
milosz
On Thu, 11 Oct 2018 at 14:36, Antonio Terceiro
<antonio.terceiro(a)linaro.org> wrote:
>
> On Thu, Oct 11, 2018 at 11:24:17AM +0100, Ryan Harkin wrote:
> > On Thu, 11 Oct 2018 at 10:35, Milosz Wasilewski <
> > milosz.wasilewski(a)linaro.org> wrote:
> >
> > > Hi,
> > >
> > > Antonio proposed a patch that sets default data retention policy to
> > > 180 days. It can be changed for each project in qa-reports separately
> > > (extended or shortened). We don't have an option to keep the data
> > > forever. IMHO this is a good idea, but I wanted to ask whether you
> > > have a need to keep your results forever.
> > >
> >
> > I don't need to keep my results forever. But what would be nice is to be
> > able to keep 180 days OR n sets of data.
> >
> > This way, release jobs, that only gets run once per month, for example,
> > could keep a history of the project over a longer period of time.
>
> Maybe we can disable the cleanup if the limit is set to 0. Then we can
> use 0 for these special cases when builds are not so frequent and we do
> need/want to keep results forever.
Maybe -1, so it's not confused with 'delete immediately'?
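To illustrate the semantics being discussed (a sketch only, not SQUAD's
actual cleanup code; 'data_retention_days', 'builds' and 'datetime' are
assumed names):

    from datetime import timedelta
    from django.utils import timezone

    def builds_to_delete(project):
        days = project.data_retention_days
        if days <= 0:
            # 0 or -1: retention disabled, keep everything forever
            return project.builds.none()
        cutoff = timezone.now() - timedelta(days=days)
        return project.builds.filter(datetime__lt=cutoff)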
milosz
Hi,
It turns out the last release wasn't great. We're hitting the following error:
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: DETAIL: Key
(status_id)=(9919) already exists.
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: The above exception
was the direct cause of the following exception:
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: Traceback (most
recent call last):
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/celery/app/trace.py",
line 374, in trace_task
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: R = retval =
fun(*args, **kwargs)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/celery/app/trace.py",
line 629, in __protected_call__
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: return
self.run(*args, **kwargs)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/squad/core/tasks/notification.py",
line 49, in notification_timeout
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]:
send_status_notification(projectstatus)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/squad/core/notification.py",
line 194, in send_status_notification
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]:
send_admin_notification(status, project)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/squad/core/notification.py",
line 238, in send_admin_notification
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: if
NotificationDelivery.exists(status, subject, txt, html):
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/squad/core/models.py",
line 847, in exists
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: html=html_hash,
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/django/db/models/manager.py",
line 85, in manager_method
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: return
getattr(self.get_queryset(), name)(*args, **kwargs)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/django/db/models/query.py",
line 466, in get_or_create
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: return
self._create_object_from_params(lookup, params)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/django/db/models/query.py",
line 506, in _create_object_from_params
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: six.reraise(*exc_info)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/django/utils/six.py",
line 686, in reraise
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: raise value
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/django/db/models/query.py",
line 498, in _create_object_from_params
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: obj =
self.create(**params)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/django/db/models/query.py",
line 394, in create
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]:
obj.save(force_insert=True, using=self.db)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/django/db/models/base.py",
line 808, in save
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]:
force_update=force_update, update_fields=update_fields)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/django/db/models/base.py",
line 838, in save_base
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: updated =
self._save_table(raw, cls, force_insert, force_update, using,
update_fields)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/django/db/models/base.py",
line 924, in _save_table
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: result =
self._do_insert(cls._base_manager, using, fields, update_pk, raw)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/django/db/models/base.py",
line 963, in _do_insert
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: using=using, raw=raw)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/django/db/models/manager.py",
line 85, in manager_method
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: return
getattr(self.get_queryset(), name)(*args, **kwargs)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/django/db/models/query.py",
line 1076, in _insert
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: return
query.get_compiler(using=using).execute_sql(return_id)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/django/db/models/sql/compiler.py",
line 1113, in execute_sql
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]:
cursor.execute(sql, params)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/django/db/backends/utils.py",
line 64, in execute
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: return
self.cursor.execute(sql, params)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/django/db/utils.py",
line 94, in __exit__
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]:
six.reraise(dj_exc_type, dj_exc_value, traceback)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/django/utils/six.py",
line 685, in reraise
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: raise
value.with_traceback(tb)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: File
"/srv/qa-reports.linaro.org/lib/python3.5/site-packages/django/db/backends/utils.py",
line 64, in execute
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: return
self.cursor.execute(sql, params)
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]:
django.db.utils.IntegrityError: duplicate key value violates unique
constraint "core_notificationdelivery_status_id_key"
Oct 03 08:16:54 qa-reports-worker-1 celery[28075]: DETAIL: Key
(status_id)=(9919) already exists.
I'm trying to find a solution.
milosz