All,
I'm testing the associated log lines feature in LAVA. My job.yaml is as follows:
- test:
    connection: lxc
    definitions:
    - from: inline
      name: my_suite
      path: me.yml
      repository:
        metadata:
          format: Lava-Test Test Definition 1.0
          name: smoke-case-run
          description: Run smoke case
        run:
          steps:
          - lava-test-case "Case001" --shell 'echo wow'
          - lava-test-case "Case002" --shell 'ls'
    namespace: lxcEnv
    timeout:
      minutes: 10
1. For Case001, the test case details (first attached screenshot) show startline=153 and lastline=154.
In the log, line 153 is "Received signal: <STARTTC> Case001" and line 154 is "Received signal: <ENDTC> Case001".
As you can see, the output "wow" is in fact not between those two lines (second screenshot).
2. For Case002, the details (third screenshot) show startline=160 and lastline=163, and that is correct: between lines 160 and 163 I do get the output of `ls` (fourth screenshot).
So what happened with "Case001"? Its startline and lastline are not correct. It seems that when the test case just does an "echo", the line information is wrong, while for a command that produces real output ("ls") it is correct.
Of course my real test cases will not just do "echo"; I only want to know whether there is any risk of fetching the wrong START/LAST lines in other scenarios. (I will fetch them with a script over XML-RPC.)
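Since the plan is to fetch these lines over XML-RPC, here is a minimal sketch in Python of what such a script could look like. Note the hedges: the server URL and job id are placeholders, the `scheduler.job_output` method name may differ between LAVA versions (check `system.listMethods` on your instance), and the assumption that startline/lastline are 1-based indices into the raw job log is mine, inferred from the screenshots above.

```python
import xmlrpc.client

# Placeholder values -- adjust for your instance.
LAVA_URL = "http://lava_ip/RPC2"
JOB_ID = "171"


def slice_log(log_lines, start_line, last_line):
    """Return the log lines between start_line and last_line, inclusive.

    Assumes LAVA's startline/lastline values are 1-based line numbers
    into the raw job log (inferred from the UI, not confirmed).
    """
    return log_lines[start_line - 1:last_line]


def fetch_case_lines(server, job_id, start_line, last_line):
    # scheduler.job_output returns the raw log as an XML-RPC Binary in
    # older LAVA releases; newer releases may expose scheduler.jobs.logs
    # instead -- verify with server.system.listMethods().
    raw_log = server.scheduler.job_output(job_id).data.decode("utf-8")
    return slice_log(raw_log.splitlines(), start_line, last_line)


# Usage against a live instance (not run here):
# server = xmlrpc.client.ServerProxy(LAVA_URL)
# print(fetch_case_lines(server, JOB_ID, 153, 154))
```

If the startline/lastline values for "echo"-only cases really are off by the amount seen above, the slice will simply return the signal lines instead of the case output, so it is worth sanity-checking the returned lines for the STARTTC/ENDTC markers before trusting them.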
Hi,
When a test job uses a boot action with method: u-boot, there was an option
to tell it which boot command to call in the U-Boot shell. This was done
using the 'type' parameter. Example from the docs:
- boot:
    method: u-boot
    commands: nfs
    type: bootz
    prompts:
    - 'root@debian:~#'
This parameter has been deprecated for more than a year. I just submitted a
patch to remove this feature. If you're using it, please consider
changing to a setup where the kernel image type is set in the deploy section.
Example:
- deploy:
    timeout:
      minutes: 4
    to: tftp
    os: oe
    kernel:
      url: 'https://example.com/zImage'
      type: 'zimage'
    dtb:
      url: 'https://example.com/dtb.dtb'
    nfsrootfs:
      url: 'https://example.com/rootfs.tar.xz'
      compression: xz
Pull request: https://git.lavasoftware.org/mwasilew/lava/pipelines/5794
milosz
Does LAVA have a roadmap to delegate resource management to some open-source mechanism, so it could share resources with other frameworks?
For example, the way Spark/Hadoop can use Mesos/YARN to share resources.
Hi Team,
While running LAVA on localhost, I am facing an error when accessing the
UI pages related to devices.
I have attached a screenshot; please find it attached.
Please help me solve this issue.
Thanks & Regards,
Dhanunjaya. P
Hi Team,
I was trying to run my first job with the qemu device type, but the job
stays in the Submitted state and never reaches Running, so I cannot
observe its behaviour.
As an initial step, how do I run the scheduler, and what MAC address do I
need to set in the device configuration file when running jobs on
localhost?
Regards,
Dhanunjaya. P
Hi, everyone!
I'm using fastboot from an LXC container to flash an image to a device.
I have a link to an archive of my image.
What is the best way to work with it?
Should the test download the archive inside the LXC, unzip it there, and then flash it?
I've tried this option using the lxc:/// URL type, but I can't find the downloaded file; can you suggest how to do it?
My job description: https://pastebin.com/LUmL6Qv3
I can see my downloads in the LXC file system; I just removed the other commands from the job.
I understand that this is wrong.
I found this in the glossary:
http://59.144.98.45/static/docs/v2/glossary.html#term-lava-lxc-home
but I can't connect the path inside the LXC container with the lxc:/// URL.
Please help.
Ilya
Hi Team,
I tried to add a device and jobs to the LAVA instance for the device type
"qemu", but I keep getting the error: Configuration Error: missing or
invalid template (jobs requesting this device type (qemu) will not be
able to start until a template is available on the master).
Can you please let me know how to add a device dictionary to the LAVA
instance?
Thanks in advance.
Regards,
Dhanunjaya. P
Hi, with lava2019.09 we hit the following issue (it seems OK in 2019.01): while a job is still running, clicking a link like http://lava_ip/results/171 shows:
500 Internal Server Error
list index out of range
Oops, something has gone wrong!
But once the job finishes, the link works again.