- Introduction
- Using job templates to automate jobs creation
- Use of the web interface
- Description of test suites
- /tests/overview - Customizable test overview page
- Review badges
- Show bug or label icon on overview if labeled gh#550
- Distinguish product and test issues bugref gh#708
- Build tagging
- Filtering test results and builds
- Highlighting job dependencies in 'All tests' table
- Show previous results in test results page gh#538
- Link to latest in scenario name gh#836
- Add `latest' query route gh#815
- Allow group overview query by result gh#531
- Add web UI controls to select more builds in group_overview gh#804
- More query parameters for configuring last builds gh#575
- Web UI controls to filter only tagged or all builds gh#807
- Carry over bugrefs from previous jobs in same scenario if still failing gh#564
- Pinning comments as group description
- Developer mode
- Job group editor gh#2111
- Use of the REST API
- Asset handling
- CLI interface
- Where to now?
This document provides additional information on the use of the web interface and the REST API as well as administration information. Administrators are recommended to read the Installation Guide first to understand the structure of the components as well as the configuration of an installed instance.
When testing an operating system, especially when doing continuous testing, there is always a certain combination of jobs, each one with its own settings, that needs to be run for every revision. Those combinations can be different for different 'flavors' of the same revision, like running a different set of jobs for each architecture or for the Full and the Lite versions. This combinatorial problem can go one step further if openQA is being used for different kinds of tests, like running some simple pre-integration tests for some snapshots combined with more comprehensive post-integration tests for release candidates.
This section describes how an instance of openQA can be configured using the options in the admin area to automatically create all the required jobs for each revision of your operating system that needs to be tested. If you are starting from scratch, you should probably go through the following order:
- Define machines in 'Machines' menu
- Define medium types (products) you have in 'Medium types' menu
- Specify various collections of tests you want to run in the 'Test suites' menu
- Define job groups in 'Job groups' menu for groups of tests
- Select individual 'Job groups' and decide what combinations make sense and need to be tested
Machines, mediums, test suites and job templates can all set various configuration variables. The so-called job templates within the job groups define how the test suites, mediums and machines should be combined in various ways to produce individual 'jobs'. All the variables from the test suite, medium, machine and job template are combined and made available to the actual test code run by the 'job', along with variables specified as part of the job creation request. Certain variables also influence openQA’s and/or os-autoinst’s own behavior in terms of how it configures the environment for the job. Variables that influence os-autoinst’s behavior are documented in the file doc/backend_vars.asciidoc in the os-autoinst repository.
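For illustration, here is a rough sketch of how settings from the different tables could end up in one job; all names and values below are made up and not taken from any real configuration:

# Machine '64bit':              QEMUCPU=qemu64, QEMUCPUS=2
# Medium 'openSUSE-DVD-x86_64': DVD=1, ISO_MAXSIZE=4700372992
# Test suite 'kde':             DESKTOP=kde
# Job template:                 WORKER_CLASS=qemu_x86_64
# Resulting job settings (all of the above merged together):
QEMUCPU=qemu64 QEMUCPUS=2 DVD=1 ISO_MAXSIZE=4700372992 DESKTOP=kde WORKER_CLASS=qemu_x86_64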
In openQA we can parameterize a test to describe for what product it will run and for what kind of machines it will be executed. For example, a test suite kde can be run for any product that has the KDE software stack installed, like openSUSE-DVD-x86_64 and openSUSE-NET-i586, and can be tested in different x86-64 and i586 machines like 64bit, 64bit_USBBoot, 32bit. In this example we could have the following test scenarios considering that the “x86_64” flavor is not compatible with the 32bit machine:
- openSUSE-DVD-x86_64-kde-64bit
- openSUSE-DVD-x86_64-kde-64bit_USBBoot
- openSUSE-NET-i586-kde-64bit
- openSUSE-NET-i586-kde-64bit_USBBoot
- openSUSE-NET-i586-kde-32bit
For every test scenario we need to configure a different instance of the test backend, for example os-autoinst, with a different set of parameters.
You need to have at least one machine set up to be able to run any tests. Those machines represent virtual machine types that you want to test. To make tests actually happen, you have to have an 'openQA worker' connected that can fulfill those specifications.
- Name. User-defined string - only needed for the operator to identify the machine configuration.
- Backend. The backend that should be used for this machine. The recommended value is qemu as it is the most tested one, but other options (such as kvm2usb or vbox) are also possible.
- Variables. Most machine variables influence os-autoinst’s behavior in terms of how the test machine is set up. A few important examples:
  - QEMUCPU can be 'qemu32' or 'qemu64' and specifies the architecture of the virtual CPU.
  - QEMUCPUS is an integer that specifies the number of cores you wish for.
  - LAPTOP if set to 1, QEMU will create a laptop profile.
  - USBBOOT when set to 1, the image will be loaded through an emulated USB stick.
A medium type (product) in openQA is a simple description without any concrete meaning. It basically consists of a name and a set of variables that define or characterize this product in os-autoinst.
Some example variables used by openSUSE are:
- ISO_MAXSIZE contains the maximum size of the product. There is a test that checks that the current size of the product is less than or equal to this variable.
- DVD if set to 1, this indicates that the medium is a DVD.
- LIVECD if set to 1, this indicates that the medium is a live image (can be a CD or USB).
- GNOME if set to 1, this indicates that it is a GNOME-only distribution.
- PROMO marks the promotional product.
- RESCUECD is set to 1 for rescue CD images.
A test suite consists of a name and a set of test variables that are used inside this particular test together with an optional description. The test variables can be used to parameterize the actual test code and influence the behaviour according to the settings.
Some sample variables used by openSUSE are:
- BTRFS if set, the file system will be Btrfs.
- DESKTOP possible values are 'kde', 'gnome', 'lxde', 'xfce' or 'textmode'. Used to indicate the desktop selected by the user during the test.
- DOCRUN used for documentation tests.
- DUALBOOT dual boot testing, needs HDD_1 and HDDVERSION.
- ENCRYPT encrypt the home directory via YaST.
- HDDVERSION used together with HDD_1 to set the operating system previously installed on the hard disk.
- INSTALLONLY only the basic installation is performed.
- INSTLANG installation language. Currently only used in documentation tests.
- LIVETEST the test is on a live medium, do not install the distribution.
- LVM select LVM as volume manager.
- NICEVIDEO used for rendering a result video for use in show rooms, skipping ugly and boring tests.
- NOAUTOLOGIN unmark autologin in YaST.
- NUMDISKS total number of disks in QEMU.
- REBOOTAFTERINSTALL if set to 1, the system will reboot after the installation.
- SCREENSHOTINTERVAL used with NICEVIDEO to improve the video quality.
- SPLITUSR a YaST configuration option.
- TOGGLEHOME a YaST configuration option.
- UPGRADE upgrade testing, needs HDD_1 and HDDVERSION.
- VIDEOMODE if set to 'text', the installation will be done in text mode.
Some of the variables usually set in test suites that influence openQA and/or os-autoinst’s own behavior are:
- HDDMODEL variable to set the HDD hardware model.
- HDDSIZEGB hard disk size in GB. Used together with the BTRFS variable.
- HDD_1 path to the pre-created hard disk.
- RAIDLEVEL RAID configuration variable.
- QEMUVGA parameter to declare the video hardware configuration in QEMU.
The job groups are the place where the actual test scenarios are defined by the selection of the medium type, the test suite and machine together with a priority.
The priority is used by the scheduler to choose the next job. If multiple jobs are scheduled and their requirements for running are fulfilled, the ones with a lower priority value are triggered first. The id is the second sorting key: of two jobs with equal requirements and the same priority, the one with the lower id is triggered first.
Job groups themselves can be created over the web UI as well as the REST API. Job groups can optionally be nested into categories. The display order of job groups and categories can be configured by drag-and-drop in the web UI.
The scenario definitions within the job groups can be created and configured by different means:
- A simple web UI wizard which is automatically shown for job groups when a new medium is added to the job group.
- An intuitive table within the web UI for adding additional test scenarios to existing media including the possibility to configure the priority values.
- The scripts openqa-load-templates and openqa-dump-templates to quickly dump and load the configuration from custom plain-text dump format files using the REST API.
- Using declarative schedule definitions in the YAML format using REST API routes or an online-editor within the web UI including a syntax checker.
Any variable defined in Test Suite, Machine, Product or Job Template table can refer to another variable using this syntax: %NAME%. When the test job is created, the string will be substituted with the value of the specified variable at that time.
For example this variable defined for Test Suite:
PUBLISH_HDD_1 = %DISTRI%-%VERSION%-%ARCH%-%DESKTOP%.qcow2
may be expanded to this job variable:
PUBLISH_HDD_1 = opensuse-13.1-i586-kde.qcow2
It’s possible to define the same variable in multiple places that would all be used for a single job - for instance, you may have a variable defined in both a test suite and a product that appear in the same job template. The precedence order for variables is as follows (from lowest to highest):
- Product
- Machine
- Test suite
- Job template
- API POST query parameters
That is, variable values set as part of the API request that triggers the jobs will 'win' over values set at any of the other locations.
If you need to override this precedence - for example, you want the value set in one particular test suite to take precedence over a setting of the same value from the API request - you can add a leading + to the variable name. For instance, if you set +VARIABLE = foo in a test suite, and passed VARIABLE=bar in the API request, the test suite setting would 'win' and the value would be foo.
If the same variable is set with a + prefix in multiple places, the same precedence order described above will apply to those settings.
Note that the WORKER_CLASS variable is not overridden in the way described above. Instead, multiple occurrences are combined.
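As a sketch of the precedence rules (the variable name and values below are made up):

# test suite defines: +DESKTOP = kde
# API request passes: DESKTOP=gnome
# resulting job:      DESKTOP=kde    (the + prefix lets the test suite win)

# test suite defines: DESKTOP = kde
# API request passes: DESKTOP=gnome
# resulting job:      DESKTOP=gnome  (normal precedence, the API request wins)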
In general the web UI should be intuitive or self-explanatory. Look out for the little blue help icons and click them for detailed help on specific sections.
Some pages use queries to select what should be shown. The query parameters are generated on clickable links, for example starting from the index page or when clicking on single builds on the group overview page. On the query pages there can be UI elements to control the parameters, for example to look further back for older builds, only show failed jobs, or change other settings. Additionally, the query parameters can be tweaked by hand if you want to provide a link to specific views.
Test suites can be described by any operator using API commands or via the admin table in the web UI.
If a description is defined, the name of the test suite on the tests overview page shows up as a link. Clicking the link will show the description in a popup. The same syntax as for comments can be used, that is Markdown with custom extensions such as shortened links to ticket systems.
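For example, a new test suite with a description could be created via the test_suites API route roughly like this (host, name and description below are made up):

openqa-client --host http://openqa.example.com test_suites post \
    name=textmode description='Minimal installation in text mode'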
The overview page is configurable by the filter box. Additionally, some query parameters can be provided which can be considered advanced or experimental. For example, specifying no build will resolve the latest build which matches the other specified parameters. Specifying no group will show all jobs from all matching job groups. Specifying multiple groups also works, see the following example.
Specifying multiple groups with no build will yield the latest build of the first group. This can be useful to have a static URL for bookmarking.
Based on comments in the individual job results for each build a certificate icon is shown on the group overview page as well as the index page to indicate that every failure has been reviewed, e.g. a bug reference or a test issue reason is stated:
Show bug or label icon on overview if labeled gh#550
- Show bug icon with URL if mentioned in test comments
- Show bug or label icon on overview if labeled
For bug references write <bugtracker_shortname>#<bug_nr> in a comment, e.g. "bsc#1234"; for generic labels use label:<keyword> where <keyword> can be any valid character sequence up to the next whitespace, e.g. "false_positive". The keywords are not defined within openQA itself. A valid list of keywords should be decided upon within each project or environment of one openQA instance.
Related issue: #10212
'Hint:' You can also write (or copy-paste) full links to bugs and issues. The links are automatically changed to the shortlinks (e.g. https://progress.opensuse.org/issues/11110 turns into poo#11110). Related issue: poo#11110
GitHub pull requests and issues can also be linked using the generic format `<marker>[#<project/repo>]#<id>`, e.g. gh#os-autoinst/openQA#1234, see gh#973
All issue references are stored within the internal database of openQA. The status can be updated using the /bugs API route, for example using external tools.
Distinguish product and test issues bugref gh#708
“progress.opensuse.org” is used to track test issues and Bugzilla to track product issues, at least for SUSE/openSUSE. openQA bugrefs distinguish the two and show corresponding icons.
Based on comments on the group overview page, individual builds can be tagged. As 'builds' by themselves do not own any data, the job group is used to store this information. A tag references the build it applies to and also has a type and an optional description. The type can later be used to distinguish tag types.
The generic format for tags is tag:<build_id>:<type>[:<description>], e.g. tag:1234:important:Beta1. The more recent tag always wins.
A 'tag' icon is shown next to tagged builds together with the description on the group_overview page. The index page does not show tags by default to prevent a potential performance regression. Tags can be enabled on the index page using the corresponding option in the filter form at the bottom of the page.
As builds can be tagged, the convention is that the 'important' type - the only one for now - is used to tag every job that corresponds to a build as 'important' and to keep the logs for these jobs longer, so that we can always refer to the attached data, e.g. for milestone builds, final releases, or jobs for which long-lasting bug reports exist.
At the top of the test results overview page is a form which allows filtering tests by result, architecture and TODO-status.
There is also a similar form at the bottom of the index page which allows filtering builds by group and customizing the limits.
When hovering over the branch icon after the test name, children of the job are highlighted blue and parents red. So far this only works for jobs displayed on the same page of the table.
Show previous results in test results page gh#538
On a test's result page there is a tab “Next & previous results” showing the results of test runs in the same scenario. It shows next and previous builds as well as test runs in the same build. This way you can easily check and compare earlier results including any comments, labels and bug references (see next section). This helps to answer questions like “Is this a new issue?”, “Is it reproducible?”, “Has it been seen before?” and “What does the history look like?”.
Querying the database for former test runs of the same scenario is a rather costly operation which we do not want to do for multiple test results at once but only for each individual test result (1:1 relation). This is why this is done in each individual test result and not for a complete build.
Related issue: #10212
Link to latest in scenario name gh#836
The link after the scenario name in the tab “Next & previous results” always points to the latest job in that scenario.
image::images/test_details-link_to_latest.png[Link to latest in scenario]
Add `latest' query route gh#815
The `latest' route always refers to the most recent job for the specified scenario. It is useful to:
- have a stable link during test development: when tests are retriggered one would otherwise have to update the URL every time, whereas with a static URL the browser can even be instructed to reload the page automatically
- link to the always current execution of the last job within one scenario, e.g. to respond faster to the standard question in bug reports “does this bug still happen?”
Examples:
- tests/latest?distri=opensuse&version=13.1&flavor=DVD&arch=x86_64&test=kde&machine=64bit
- tests/latest?flavor=DVD&arch=x86_64&test=kde
- tests/latest?test=foobar - this searches for the most recent job using the test_suite `foobar', covering all distri, version, flavor, arch and machine values. To be more specific, add the other query parameters.
Allow group overview query by result gh#531
This allows, for example, showing only failed builds. It could be used as in http://lists.opensuse.org/opensuse-factory/2016-02/msg00018.html for “known defects”.
Example: add query parameters like …&result=failed&arch=x86_64 to show only failed jobs for the single selected architecture.
Add web UI controls to select more builds in group_overview gh#804
The query parameter `limit_builds' allows showing more than the default 10 builds on demand. Just like for configuring previous results, there are web UI selections to reload the same page with a higher number of builds. For this, the limit of days is increased to show more builds, still limited by the selected number.
More query parameters for configuring last builds gh#575
By using advanced query parameters in the URLs you can configure the search for builds. Higher limits yield more complex database queries but can be selected for special investigation use cases, e.g. when one wants to get an overview of a longer history. This applies to both the index dashboard and the group overview page.
Example showing builds up to three weeks old (instead of the default two weeks) and up to 20 builds (instead of the default 10) on the group overview page:
http://openqa/group_overview/1?time_limit_days=21&limit_builds=20
Web UI controls to filter only tagged or all builds gh#807
Using the query parameter `only_tagged=[0|1]' the list can be filtered, e.g. to show only tagged (important) builds.
Related issue: #11052
Carry over bugrefs from previous jobs in same scenario if still failing gh#564
It is possible to label all failing tests, but this is tedious for a human user as many failures keep hitting the same issue until it gets fixed. It helps if a label is carried over to a build that is still failing. This idea is inspired by https://wiki.jenkins-ci.org/display/JENKINS/Claim+plugin and has been activated for bugrefs.
Bug references are not carried over across passes: after a job passed, a failure in a subsequent job is assumed to have a different reason.
Related issue: #10212
Comments can be pinned as the group description by adding the keyword pinned-description anywhere in a comment on the group overview page. The comment will then be shown at the top of the group overview page. However, this only works for operators and admins.
The developer mode allows you to:
- Create or update needles from assert_screen mismatches ("re-needling")
- Pause the test execution (at a certain module) for manual investigation of the SUT
It can be accessed via the "Live View" tab of a running test. Only registered users can take control over a test. Basic instructions and buttons providing further information about the different options are shown on the web page itself, so this section does not repeat them and instead explains the overall workflow.
In case the developer mode is not working on your instance, try to follow the steps for debugging the developer mode under 'Pitfalls'.
1. In case new needles should be created, add the corresponding +assert_screen+s to your test.
2. Start the test with the +assert_screen+s which are supposed to fail.
3. Select "Pause on assert_screen failure" and confirm.
4. Wait until the test has paused. There is a button to skip the current timeout to speed this up.
5. A button for accessing the needle editor should appear. It may take a few seconds to show up because the screenshots created so far need to be uploaded from the worker to the web UI. Of course it is also possible to go back to the "Details" tab to create a new needle from any previous screenshot/match available.
6. After creating the new needle, click the resume button to test whether it worked.
Steps 4. to 6. can be repeated for further +assert_screen+s/needles without restarting the test.
Job group editor gh#2111
Scenarios are defined as part of a job group. The Edit job group button exposes the editor.
By default the scenarios are listed in a table per medium. Changes to the priority are applied immediately. Machines can be added by selecting the architecture column and picking a machine from the list. Remove scenarios by removing all of their machines. Add new scenarios via the blue Plus icon at the top of the table.
The Edit YAML button can be used to reveal the YAML editor and migrate a group. After saving for the first time, the group can only be configured in YAML. The table view will not be shown anymore.
Note that making a backup before migrating groups may be a good idea, for example using openqa-dump-templates.
Settings can be specified as a key/value pair for each scenario. There is no equivalent in the table view so you need to migrate groups to use this feature.
YAML-only groups need to be updated through the YAML editor or the YAML-related REST API routes.
openQA includes a client script which - depending on the distribution - is packaged independently, so you can interface with an existing openQA instance without needing to install the full package. Call openqa-client --help for help.
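For instance, a quick way to query a single job on a remote instance could look like the following; the host and the job ID are made-up examples:

openqa-client --host http://openqa.example.com jobs/1234 get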
Basics are described in the Getting Started guide.
Tests can be triggered in multiple ways: using openqa-clone-job, jobs post, isos post, as well as by retriggering existing jobs or whole media over the web UI.
If one wants to recreate an existing job from any publicly available openQA instance, the script openqa-clone-job can be used to copy the necessary settings and assets to another instance and schedule the test. For the test to be executed it has to be ensured that matching resources can be found, for example a worker with a matching WORKER_CLASS must be registered. More details on openqa-clone-job can be found in Writing Tests.
Single jobs can be spawned using the jobs post API route. All necessary settings for a job must be supplied in the API request. The "openQA client" has examples for this.
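A minimal sketch of such a request via the client script could look like the following; all setting values are made up, and a real job usually needs more settings (and matching assets) to be useful:

openqa-client jobs post TEST=kde DISTRI=opensuse VERSION=42.1 FLAVOR=DVD \
    ARCH=x86_64 MACHINE=64bit ISO=openSUSE-42.1-DVD-x86_64.iso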
The most common way of spawning jobs on production instances is using the isos post API route. Based on previously defined settings for media, job groups, machines and test suites, jobs are triggered based on template matching. The Getting Started guide already mentioned examples. In addition to the parameters necessary for template matching, further parameters can be specified which are forwarded to all triggered jobs. There are also special parameters which only influence the way the triggering itself is done. These parameters all start with a leading underscore but are set as request parameters in the same way as the other parameters.
- _OBSOLETE: Obsolete jobs in older builds with the same DISTRI and VERSION (the default behavior is not to obsolete). With this option, jobs which are currently pending, for example scheduled or running, are cancelled when a new medium is triggered.
- _DEPRIORITIZEBUILD: Setting this switch to '1' will deprioritize the unfinished jobs of old builds and obsolete the jobs once the configurable priority limit is reached.
- _DEPRIORITIZE_LIMIT: The configurable priority limit up to which jobs should be deprioritized. Needs _DEPRIORITIZEBUILD.
- _ONLY_OBSOLETE_SAME_BUILD: Only obsolete (or deprioritize) jobs for the same BUILD. This is useful for cases where a new build appearing does not necessarily mean existing jobs for earlier builds with the same DISTRI and VERSION are no longer interesting, but you still want to be able to re-submit jobs for a build and have existing jobs for the exact same build obsoleted. Needs _OBSOLETE (or _DEPRIORITIZEBUILD).
- _SKIP_CHAINED_DEPS: Do not schedule parent test suites which are specified in START_AFTER_TEST.
- _GROUP: Job templates not matching the given group name are ignored. Does not affect obsoletion behavior.
- _GROUP_ID: Same as _GROUP, but the group is specified by its ID instead of its name.
- _PRIORITY: Sets the priority for the new jobs (which otherwise defaults to the priority of the job template).
Example for _DEPRIORITIZEBUILD and _DEPRIORITIZE_LIMIT:
openqa-client isos post ISO=my_iso.iso DISTRI=my_distri FLAVOR=sweet \
ARCH=my_arch VERSION=42 BUILD=1234 \
_DEPRIORITIZEBUILD=1 _DEPRIORITIZE_LIMIT=120
Job groups can be queried via the experimental REST API:
api/v1/experimental/job_templates_scheduling
The GET request will get the YAML for one or multiple groups while a POST request conversely updates the YAML for a particular group.
Two scripts using these routes can be used to import and export YAML templates:
openqa-dump-templates --json --group test > test.json
openqa-load-templates --group test test.json
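The route can also be queried directly with the client script, e.g. to fetch the YAML schedule of a single group; the host and the group ID below are made-up examples:

openqa-client --host http://openqa.example.com experimental/job_templates_scheduling/1 get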
Multiple parameters exist to reference "assets" to be used by tests. "Assets" are essentially content that is stored by the openQA web-UI and provided to the workers; when sending jobs to os-autoinst on the workers, openQA adjusts the parameter values to refer to an absolute path where the worker will be able to access the content. Things that are typically assets include the ISOs and other images that are tested, for example.
Some assets can also be produced by a job, sent back to the web-UI, and used by a later job (see explanation of 'storing' and 'publishing' assets, below). Assets can also be seen in the web-UI and downloaded directly (though there is a configuration option to hide some or all asset types from public view in the web-UI).
The parameters treated as assets are as follows. Where you see e.g. ISO_n, that means ISO_1, ISO_2 etc. will all be treated as assets.
- ISO (type iso)
- ISO_n (type iso)
- HDD_n (type hdd)
- UEFI_PFLASH_VARS (type hdd) (in some cases, see below)
- REPO_n (type repo)
- ASSET_n (type other)
- KERNEL (type other)
- INITRD (type other)
The values of the above parameters are expected to be the name of a file - or, in the case of REPO_n, a directory - that exists under the path /var/lib/openqa/share/factory on the openQA web-UI. That path has subdirectories for each of the asset types, and the file or directory must be in the correct subdirectory, so e.g. the file for an asset HDD_1 must be under /var/lib/openqa/share/factory/hdd. You may create a subdirectory called fixed for any asset type and place assets there (e.g. in /var/lib/openqa/share/factory/hdd/fixed for hdd-type assets): this exempts them from the automatic cleanup described under 'Asset cleanup' below. Non-fixed assets are always subject to the cleanup.
UEFI_PFLASH_VARS is a special case: whether it is treated as an asset depends on the value. If the value looks like an absolute path (starts with /), it will not be treated as an asset (and so the value should be an absolute path for a file which exists on the relevant worker system(s)). Otherwise, it is treated as an hdd-type asset. This allows tests to use a stock base image (like the ones provided by edk2) for a simple case, but also allows a job to upload its image on completion - including any changes made to the UEFI variables during the execution of the job - for use by a child job which needs to inherit those changes.
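As a sketch of the two cases (the file names and paths below are only illustrative examples):

# relative file name: treated as an hdd-type asset managed by openQA
UEFI_PFLASH_VARS=my-uefi-vars.qcow2

# absolute path: not treated as an asset, must exist on the worker itself
UEFI_PFLASH_VARS=/usr/share/qemu/ovmf-x86_64-vars.bin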
You can also use special suffixes to the basic parameter forms to access some special handling for assets.
- _URL: Before starting these jobs, try to download these assets into the relevant asset directory of the openQA web-UI from trusted domains specified in /etc/openqa/openqa.ini.
- _DECOMPRESS_URL: Specify a compressed asset to be downloaded that will be uncompressed by openQA.
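As a hedged example of both suffixes (the URLs and file names are made up): specifying ISO_1_URL causes the referenced file to be downloaded and ISO_1 to be set to its file name, and _DECOMPRESS_URL additionally strips the compression suffix:

ISO_1_URL=http://mirror.example.com/iso/foo.iso                  # results in ISO_1=foo.iso
HDD_1_DECOMPRESS_URL=http://mirror.example.com/hdd/bar.qcow2.xz  # results in HDD_1=bar.qcow2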
Assets may be shared between the web-UI and the workers by having them literally use a shared filesystem (this used to be the only option), or by having the workers download them from the server when needed and cache them locally. See 'Asset Caching' in the Installation Guide for more on this.
HDD_n assets can be 'stored' or 'published' by a job, and UEFI_PFLASH_VARS assets can be 'published'. These both mean that if the job completes successfully, the resulting state of those disk assets will be sent back to the web-UI and made available as an hdd-type asset. To 'store' an asset, you can specify e.g. STORE_HDD_1. To 'publish' it, you can specify e.g. PUBLISH_HDD_1 or PUBLISH_PFLASH_VARS. If you specify PUBLISH_HDD_1=updated.qcow2, the HDD_1 disk image as it exists at the end of the test will be uploaded back to the web-UI and stored under the name updated.qcow2; any other job can then specify HDD_1=updated.qcow2 to use this published image as its HDD_1.
The difference between 'storing' and 'publishing' is that when 'storing' an asset, it will be altered in some way (currently, by prepending the job ID to the filename) to associate it with the particular job that produced it. That means that many jobs can 'store' an asset under "the same name" without conflicting. Of course, that would seem to make it hard for other jobs to use the 'stored' image - but for "chained" jobs, the reverse operation is done transparently. This all means that a 'parent' job template can specify STORE_HDD_1=somename.qcow2 and its 'child' job template(s) can specify HDD_1=somename.qcow2, and everything will work, without multiple runs of the same jobs overwriting the asset. For more on "chained" jobs, see 'Job dependencies' in the Writing Tests guide.
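A minimal sketch of such a chained pair of job templates (the names are made up; START_AFTER_TEST is what makes the second job a 'child' of the first):

# settings of the 'parent' job template, e.g. an installation test
STORE_HDD_1=installed-system.qcow2

# settings of the 'child' job template
START_AFTER_TEST=install
HDD_1=installed-system.qcow2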
When using this mechanism you will often also want to use the 'Variable expansion' mechanism described in the Getting Started guide.
For more information on assets, see 'Asset handling' below.
Assets like ISO files consume a huge amount of disk space. Therefore openQA removes assets automatically according to configurable limits.
This section provides an overall description of the cleanup strategy and how to configure the limits. Cleanup-related parameters for the REST API can be found in the 'Asset handling' section under 'Use of the REST API'.
openQA frequently checks whether assets need to be removed according to the configured limits.
To find out whether an asset should be removed, openQA determines by which job groups the asset is used. If at least one job within a certain job group is using an asset, the asset is considered to be used by that group.
So an asset can be accounted to multiple groups. The assets table which is accessible via the admin menu shows these groups for each asset and also the latest job.
If the size limit for assets by a certain group is exceeded, openQA will remove assets accounted to that group:
- Assets belonging to older jobs are preferred for removal.
- Assets belonging to jobs which are still scheduled or running are not considered.
- Assets which are also accounted to another group that still has space left are not considered.
Assets which do not belong to any group are removed after a configurable duration. Keep in mind that this behavior is also enabled on local instances and affects all cloned jobs (unless cloned into a job group).
'Fixed' assets - those placed in the fixed subdirectory of the relevant asset directory - are counted against the group size limit, but are never cleaned up. This is intended for things like base disk images which must always be available for a test to work.
To configure the maximum size for the assets of a group, open 'Job groups' in the operators menu and select a group. The size limit for assets can be configured under 'Edit job group properties'. It also shows the size of assets which belong to that group and not to any other group.
Job groups inherit the size limit from their parent group unless the limit is set explicitly. The default size limit for groups can be adjusted in the default_group_limits section of the openQA config file.
Besides the daemon argument to run the actual web service, the openQA startup script /usr/share/openqa/script/openqa supports further arguments.
For a full list of those commands, just invoke /usr/share/openqa/script/openqa -h. This also works for sub-commands (e.g. /usr/share/openqa/script/openqa minion -h, /usr/share/openqa/script/openqa minion job -h).
Note that prefork is only supported for the main web service but not for other services like the live view handler.
For test developers it is recommended to continue with the Test Developer Guide.