Hi,
I have browsed the LAVA website documentation overall. I think the DUT (Device Under Test) should have a "daemon" program to communicate with the Worker Dispatcher module, but I did not find where the source code of this DUT "daemon" program is, nor how to build and install it.
If there is no "daemon" program on the DUT, how do I add/update the programs running on the DUT, for example an application test program, the Linux kernel, U-Boot, etc.?
Thanks, best regards,
Jiang Lao
Hello Lava Users,
We are creating queries in LAVA and adding multiple conditions to a query. For example, in our YAML job definitions we have included an "os" field in the metadata section, which takes different values. The metadata sections for two different jobs look like this:
Job1:
metadata:
  description: '"Build SiemensIPC-327E target with latest build"'
  os: Debian
  device: imx6q
  Build_ID: QA-BUILD-F0150

Job2:
metadata:
  description: '"Build SiemensIPC-327E target with latest build"'
  os: Ubuntu
  device: imx6q
  Build_ID: QA-BUILD-F0150
Now, with queries, I am trying to filter for only the jobs running with Build_ID QA-BUILD-F0150 and os Debian:
Entity               Field     Operator  Value
namedtestattribute   Build_ID  exact     QA-BUILD-F0150
namedtestattribute   os        exact     Debian
When I create a query with the above conditions, I expect running it to list only the Job1 information, but at present both jobs get listed. Is this the expected behavior? The LAVA server version used is the 2018.5.post1 release.
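In case it helps with reproducing this, I can trigger the same pair of conditions over the XML-RPC API, which is quicker to iterate on than re-saving the query in the web UI. A minimal sketch, assuming the results.make_custom_query call is available in this release and that conditions are passed as comma-separated entity__field__operator__value strings (please verify both against your instance's API documentation; the URL and token below are placeholders):

import xmlrpc.client

# Hypothetical instance URL and token - replace with real values.
server = xmlrpc.client.ServerProxy(
    "https://<user>:<token>@lava.example.com/RPC2")

# The same two AND-ed conditions as built in the web UI above.
conditions = ("namedtestattribute__Build_ID__exact__QA-BUILD-F0150,"
              "namedtestattribute__os__exact__Debian")

# I would expect only Job1 back, since both conditions should apply.
jobs = server.results.make_custom_query("testjob", conditions)
print(jobs)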
Thanks,
Hemanth.
Hello everyone,
is there any documentation on how Linaro uses Salt for configuring LAVA nodes? I know the infrastructure code is hosted at https://git.linaro.org/lava/lava-lab.git, but since I am new to Salt (and configuration management in general), I would love to have some kind of starting point.
Where is this repository checked out? Who applies the defined Salt states and when? Are there any automatic processes happening when someone commits to the repository (e.g. changing a device type or health check job)? What is Linaro's workflow in these cases?
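To make the question more concrete: my current understanding is that Salt states are plain YAML files which get applied to minions with something like "salt '<worker>' state.apply". A purely illustrative sketch of what I imagine a state for a LAVA machine might look like (the state IDs and paths here are made up by me, not taken from the lava-lab repository):

# hypothetical lava/worker.sls
lava-dispatcher:
  pkg.installed: []     # install the dispatcher package on the worker

# deploy a device dictionary from the state tree (illustrative path)
/etc/lava-server/dispatcher-config/devices/my-board.jinja2:
  file.managed:
    - source: salt://lava/files/my-board.jinja2

Is this roughly how the lava-lab states are structured, and who runs state.apply against the production machines?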
Mit freundlichen Grüßen / Best regards
Tim Jaacks
DEVELOPMENT ENGINEER
Garz & Fricke GmbH
Tempowerkring 2
21079 Hamburg
Direct: +49 40 791 899 - 55
Fax: +49 40 791899 - 39
tim.jaacks(a)garz-fricke.com
www.garz-fricke.com
SOLUTIONS THAT COMPLETE!
Sitz der Gesellschaft: D-21079 Hamburg
Registergericht: Amtsgericht Hamburg, HRB 60514
Geschäftsführer: Matthias Fricke, Manfred Garz
Hi,
I just installed lava-server at the latest release, 2018.5.post1, and created a local Django account. I am unable to log in using the LAVA web page from my local machine. The LAVA web page does not prompt any error message, and I have no idea how to debug this problem.
Any idea?
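My plan so far is to reset the account from the command line and watch the server logs during a login attempt; does this make sense? (This assumes the standard Debian packaging; the log file name below is my guess.)

# confirm the account by (re)setting its password
sudo lava-server manage changepassword <username>

# or create a known-good superuser and try logging in with that
sudo lava-server manage createsuperuser --username admin --email admin@example.com

# watch the server-side logs while attempting to log in
sudo tail -f /var/log/lava-server/django.log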
Regards,
Alim Hussin
Hello,
In my LAVA pipeline I have the following snippet:
notify:
  recipients:
  - to:
      method: email
      mail: diego.russo(a)arm.com
  criteria:
    status: finished
  verbosity: verbose
When the job ends, though, I'm not receiving any email. I've looked at the documentation but couldn't find anything about mail configuration, and the logs are also not very helpful for debugging this issue.
How/where can I configure the email client? I'm also interested in changing the "from:" field.
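My assumption so far is that LAVA hands notifications to Django's email machinery, so the server-wide Django mail settings would be the place to configure this; on my Debian install the instance settings live in /etc/lava-server/settings.conf (JSON). I am guessing at keys like the following (EMAIL_HOST/EMAIL_PORT for the SMTP server, SERVER_EMAIL as the Django setting that usually controls the "from:" address), but I could not confirm that this release honours them - can anyone confirm?

{
  "EMAIL_HOST": "smtp.example.com",
  "EMAIL_PORT": 587,
  "SERVER_EMAIL": "lava-server@example.com"
}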
Thanks
--
Diego Russo
Staff Software Engineer - diego.russo(a)arm.com
Direct Tel. no: +44 1223 405920
Main Tel. no: +44 1223 400400
ARM Ltd. CPC1, Capital Park, Cambridge Road, Fulbourn, CB21 5XE, United Kingdom
http://www.diegor.co.uk - http://twitter.com/diegor - http://www.linkedin.com/in/diegor
IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.
Hello Lava-Users,
I have a UEFI-based x86 target board which I want to connect to LAVA in order to execute tests.
When I go through https://validation.linaro.org/static/docs/v2/integrate-uefi.html, it specifies the different mechanisms available. I am confused here, as I am not completely clear on the differences between the systems mentioned.
If I just know the UEFI implementation method of the target, is that enough to select which method can be used for booting?
What else do I need to know before deciding on the method to be used for booting an x86-based target board?
Thanks,
Hemanth.
Hey,
Where I work, we've been using LAVA since 2014 to test our in-house Linux distribution. Our devices are typically "low-end" devices with 128 to 256 MB RAM and up to 512 MB NAND flash. We use LAVA to test our BSPs, e.g. lots of connectivity/interface/IO tests. We also use it for stability testing and performance testing (like measuring the application context-switch time or Ethernet TX rates).
For a while, there has been a growing concern within our team that LAVA might not be ideal for our testing needs. Right now, the team is discussing whether we should drop LAVA and use something else. There is even talk of developing our own test framework. I personally like the idea behind LAVA but also agree that it has been a bumpy road these past 4 years. Due to various bugs and missing features, we've several times been forced to upgrade to an unstable version of LAVA just to get normal operations working. Twice we've lost the entire test database because we were unable to recover from a LAVA upgrade. In those cases, it was easier for us to just "start over". Today we use LAVA 2018.02. I've compiled a list that summarizes the most pressing issues we've experienced with LAVA:
1. Test development is slow. Several members of my team avoid test development because the process of developing a test with LAVA is tedious and very time-consuming. I think it mostly boils down to making a change, pushing it to a Git repo, submitting a job, running the job and then watching the LAVA job output for the result. In our environment this takes several minutes, just to verify a change in a test.
I'm aware of the guidelines for making portable tests, and I personally think we can be a lot better at this for single-node tests, which would enable us to run test scripts on local devices; but we also have quite a number of multinode jobs that we find difficult to run in any environment other than LAVA. We've also tried using hacksessions to speed up development (e.g. you edit the tests on the DUT and then synchronize them back to your host once you're happy). This works quite well, but feels a bit hacky, and if the hacksession timeout hits, you lose all your work ;-)
2. Can't test bootloaders. Several of our hardware platforms contain FPGAs, and the FPGA images / "firmware" are tightly bundled with the bootloader. In addition to configuring the FPGA, the bootloader also contains in-house developed code for firmware updates that we would like to autotest. We have a _lot_ of bootloader variants and we need a way of testing them along with the Linux system. Our current setup is that we manually flash bootloaders in our LAVA lab and then cross our fingers that the Linux system we test on the device is compatible with the bootloader. The ideal situation for us would be to always test the Linux system and the matching bootloader together. Granted, the better solution would be to move the FPGA loading out of the bootloader, but this is a design chosen by our SoC provider and we prefer to keep it.
We also manage an "LTS" branch of our Linux distro. We support it for several years, and we need to ensure our test setup can test both our "master" branch and our LTS branch. With our current setup this is not possible, because all devices in our lab run a bootloader that was manually flashed at some arbitrary time.
We've considered setting up several devices of the same hardware type, but with different bootloaders, and then letting LAVA treat them as different device types. This would work, but our lab would fill up fast and the utilization of each device would be low.
We also tried making jobs that boot a golden Linux system, write the software under test (including the bootloader), reboot and run the tests. This did work, but it required customization per device, since the test has to know where to write the bootloader. We would rather put this information into the LAVA device type definition somehow.
3. Can't write to NAND. Our devices are NAND-based and we use UBIFS on top of UBI. We have not found a way for LAVA to write to NAND, because the LAVA mechanism that embeds things into the rootfs before deployment doesn't support UBIFS. At the moment we ramboot our devices, but we are now at the point where our devices OOM because they don't have enough RAM to hold both the rootfs and the running tests. Our current workaround is to split the job into several jobs that each run a smaller set of tests, but this is less than ideal, because it makes our test runs slower (we need to reboot) and it is a bit annoying that test results are spread across several jobs.
We have our own deployment tool that would be nice to integrate into LAVA as a deployment method. It accepts a kernel, rootfs, DT and bootloader and writes them using TFTP or DFU (USB), depending on the target. To avoid forking all of LAVA in order to implement such a deployment method, is there any plugin architecture that allows us to install additional deploy methods alongside the LAVA packages?
I'd love to get your views on these issues and if there is a solution when using LAVA.
Best regards, Magnus.