CKI survey response

CKI survey response provided by Veronika Kabatova

Diagram

The diagram more or less works; there are just small differences in some of our pipelines (e.g. we grab completed Koji builds for testing Fedora kernels, so in that case we don't touch the source or build the kernel). For generic purposes it works, and it is indeed very similar to our pipelines for source testing.

Survey Questions

  • What is the name of your test framework? CKI Project (Continuous Kernel Integration)

Which of the aspects below of the CI loop does your test framework perform?

Does your test framework:

source code access

  • access source code repositories for the software under test? Yes (unless testing RPM builds)
  • access source code repositories for the test software? Yes
  • include the source for the test software? Yes
  • provide interfaces for developers to perform code reviews? No
  • detect that the software under test has a new version? Yes
    • if so, how? (e.g. polling a repository, a git hook, scanning a mail list, etc.) Polling git trees and Patchwork, and listening on the fedmsg bus (see the sketch after this list)
  • detect that the test software has a new version? Yes
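
To make the fedmsg listening concrete, here is a minimal sketch of a consumer that reacts to finished COPR and Koji builds. The topic names come from the Fedora fedmsg topic list linked later in the APIs section; the "kernel" package filter and the trigger step are illustrative assumptions, not the actual CKI trigger code.

    # Sketch only: watch the fedmsg bus for finished COPR/Koji builds and
    # decide whether a new kernel build should be picked up for testing.
    # The package filter and the trigger step are hypothetical.
    import fedmsg

    INTERESTING_TOPICS = {
        "org.fedoraproject.prod.copr.build.end",               # COPR build finished
        "org.fedoraproject.prod.buildsys.build.state.change",  # Koji build state change
    }

    for name, endpoint, topic, msg in fedmsg.tail_messages():
        if topic not in INTERESTING_TOPICS:
            continue
        body = msg.get("msg", {})
        # Hypothetical filter: only react to kernel builds.
        if body.get("package") == "kernel" or body.get("name") == "kernel":
            print("new build seen on %s" % topic)
            # a real consumer would trigger the GitLab pipeline here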

test definitions

Does your test system:

  • have a test definition repository? Yes, multiple
    • if so, what data format or language is used (e.g. yaml, json, shell script)

YAML for the metadata. Tests themselves can be written in any language as long as they are wrapped as Beaker tasks, which requires some XML snippets.
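
To illustrate the XML snippets mentioned above, here is a rough sketch of how a wrapped test is referenced from a Beaker job and submitted with the bkr CLI. The whiteboard, distro family and task names are placeholders; the authoritative schema is the beaker-job.rng file linked in the APIs section, and in the real pipeline the task list is produced by kpet (listed under the CLI tools).

    # Rough sketch: reference one wrapped test from a Beaker job XML and
    # submit it with the Beaker client.  All names below are placeholders.
    import subprocess
    import tempfile

    JOB_XML = """\
    <job>
      <whiteboard>CKI example: run one wrapped test</whiteboard>
      <recipeSet>
        <recipe>
          <distroRequires>
            <distro_family op="=" value="Fedora33"/>
          </distroRequires>
          <hostRequires/>
          <task name="/distribution/check-install" role="STANDALONE"/>
          <task name="/kernel/example/wrapped-test" role="STANDALONE"/>
        </recipe>
      </recipeSet>
    </job>
    """

    with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as job_file:
        job_file.write(JOB_XML)
        path = job_file.name

    # 'bkr job-submit' is part of the Beaker client; it prints the new job ID.
    subprocess.run(["bkr", "job-submit", path], check=True)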

Does your test definition include:

  • source code (or source code location)? Yes (part of configuration)
  • dependency information? Yes (each test is responsible for handling its dependencies)
  • execution instructions? Yes (in each test's README)
  • command line variants? Yes (in each test's README)
  • environment variants? Yes
  • setup instructions? Yes (in each test's README)
  • cleanup instructions? No, each test is responsible for cleaning up after itself
    • if anything else, please describe:

Does your test system:

  • provide a set of existing tests? Yes
    • if so, how many? Currently about 120 tests are open sourced

There are some more running internally. Some tests' reliability is a WIP. Some tests are architecture or tree specific, so a full test set is not executed for each run.

build management

Does your test system:

  • build the software under test (e.g. the kernel)? Yes, unless testing already-built RPMs (e.g. Fedora kernels built in Koji and COPR)
  • build the test software? (e.g. dbench)? Yes, if needed
  • build other software (such as the distro, libraries, firmware)? No, everything is installed by Beaker from images and distro repositories
  • support cross-compilation? Yes
  • require a toolchain or build system for the SUT? Yes
  • require a toolchain or build system for the test software? Test-dependent, some need to be compiled and some not
  • come with pre-built toolchains?

Yes, RPMs/tarballs installed in Beaker before testing starts. Build containers with toolchains for kernel compilation are also available at https://gitlab.com/cki-project/containers/

  • store the build artifacts for generated software? Yes
    • in what format is the build metadata stored (e.g. json)? GitLab CI's variables and pipelines
    • are the build artifacts stored as raw files or in a database? Raw files (GitLab artifacts)
      • if a database, what database?

Test scheduling/management

Does your test system:

  • check that dependencies are met before a test is run? Yes, each test is responsible for listing its dependencies and aborts if they can't be installed (a sketch follows at the end of this section)
  • schedule the test for the DUT? Yes
    • select an appropriate individual DUT based on SUT or test attributes? Yes
    • reserve the DUT? Yes (for the test duration, but can be configured to extend the reservation)
    • release the DUT? Yes
  • install the software under test to the DUT? Yes
  • install required packages before a test is run? Yes
  • require particular bootloader on the DUT? (e.g. grub, uboot, etc.) No
  • deploy the test program to the DUT? Yes
  • prepare the test environment on the DUT? Yes
  • start a monitor (another process to collect data) on the DUT? Yes, part of Beaker's framework
  • start a monitor on external equipment? It's part of Beaker labs but we don't use it
  • initiate the test on the DUT? Yes
  • clean up the test environment on the DUT? Yes

Most of the above is handled by Beaker
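
As a sketch of the dependency handling mentioned at the start of this section: a wrapped test declares the packages it needs and bails out cleanly when they cannot be installed, so a missing dependency is not reported as a kernel failure. The package list and the rstrnt-report-result call are illustrative assumptions about how such a wrapper can look, not a copy of a specific CKI test.

    # Illustrative dependency check for a test running under Beaker/restraint.
    # The package list is a placeholder; each real test declares its own.
    import subprocess
    import sys

    DEPENDENCIES = ["gcc", "make", "dbench"]   # hypothetical example

    def install(packages):
        """Try to install the packages; return True on success."""
        return subprocess.run(["dnf", "install", "-y", *packages]).returncode == 0

    if not install(DEPENDENCIES):
        # Mark this phase as skipped so the harness doesn't count a missing
        # dependency as a test failure, then abort this task.
        subprocess.run(["rstrnt-report-result", "dependency-check", "SKIP"])
        sys.exit(1)

    # ... the actual test would run from here ...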

DUT control

Does your test system:

  • store board configuration data? Yes, stored in Beaker
    • in what format? No idea, we aren't Beaker developers
  • store external equipment configuration data? No idea, we aren't Beaker developers
    • in what format?
  • power cycle the DUT? Yes
  • monitor the power usage during a run? No
  • gather a kernel trace during a run? Yes, part of Beaker
  • claim other hardware resources or machines (other than the DUT) for use during a test? Yes, test dependent
  • reserve a board for interactive use (ie remove it from automated testing)? Yes, configurable (currently disabled for CI runs)
  • provide a web-based control interface for the lab? Yes
  • provide a CLI control interface for the lab? Yes

Run artifact handling

Does your test system:

  • store run artifacts Yes
    • in what format? XML/JSON job results, provided by Beaker. Test logs are in plaintext.
  • put the run meta-data in a database? Yes
    • if so, which database? https://gitlab.com/cki-project/datawarehouse
  • parse the test logs for results? Only the results, not the logs; the onboarded tests need to give clear results, otherwise we don't run them
  • convert data from test logs into a unified format? No, but it is planned
    • if so, what is the format?
  • evaluate pass criteria for a test (e.g. ignored results, counts or thresholds)? Yes
  • do you have a common set of result names: (e.g. pass, fail, skip, etc.) Yes
    • if so, what are they?

Provided by Beaker: PASS, FAIL, WARN, SKIP, PANIC, NONE, NEW. These get interpreted into binary pass/fail in the reports. We use a WAIVED status on top of these for test results that can be ignored. A sketch of this mapping follows at the end of this section.

  • How is run data collected from the DUT?
    • e.g. by pushing from the DUT, or pulling from a server? Pulled from Beaker
  • How is run data collected from external equipment? N/A
  • Is external equipment data parsed? N/A
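
Below is a minimal sketch of how the Beaker statuses listed above can be collapsed into the binary pass/fail used in the reports, with WAIVED results ignored. The exact grouping of WARN, PANIC, NONE and NEW is an assumption made for illustration; the real report logic may group them differently.

    # Sketch: collapse Beaker result names into binary pass/fail for a report,
    # skipping results that were waived.  The grouping below is illustrative:
    # statuses outside FAILING (PASS, SKIP, NONE, NEW) are assumed not to fail a run.
    FAILING = {"FAIL", "WARN", "PANIC"}

    def overall_result(results):
        """results: iterable of (test_name, status, waived) tuples."""
        failed = [name for name, status, waived in results
                  if not waived and status in FAILING]
        return ("FAIL", failed) if failed else ("PASS", [])

    print(overall_result([
        ("boot", "PASS", False),
        ("stress/dbench", "WARN", True),    # waived, so ignored
        ("net/udp", "FAIL", False),
    ]))
    # -> ('FAIL', ['net/udp'])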

User interface

Does your test system:

  • have a visualization system? https://gitlab.com/cki-project/datawarehouse
  • show build artifacts to users? Yes. Links are available in the reports and dashboard
  • show run artifacts to users? Yes
  • do you have a common set of result colors? Partially
    • if so, what are they? Red (failure) vs green (pass) for GitLab pipelines, 'X' vs check-mark emoji in reports
  • generate reports for test runs? Yes
  • notify users of test results by e-mail? Yes (if configured)
  • can you query (aggregate and filter) the build meta-data? Yes (but limited)
  • can you query (aggregate and filter) the run meta-data? Yes (but limited)
  • what language or data format is used for online results presentation? Dashboards made with Django/Python
  • what language or data format is used for reports? (e.g. PDF, excel, etc.) Plaintext emails, dashboards made with Django/Python
  • does your test system have a CLI control tool? Yes, multiple different tools for different parts of the pipeline
    • what is it called?

  • bkr (Beaker CLI)
  • https://gitlab.com/cki-project/pipeline-tools -- set of Python scripts to retrigger and query pipelines
  • https://gitlab.com/cki-project/pipeline-trigger -- pipeline triggers
  • https://gitlab.com/cki-project/kpet -- kpet, for patch evaluation and test picking
  • https://gitlab.com/cki-project/kpet-db -- CKI "database" for the kpet tool
  • https://gitlab.com/cki-project/skt -- skt, for Beaker interaction

Languages:

  • what is the base language of your test framework core?

Beaker tests use restraint wrappers (Bash/Makefiles) but tests themselves can be in any language. CKI itself uses Python and YAML with some Bash thrown in.

What languages or data formats is the user required to learn? YAML

Can a user do the following with your test framework:

Please note that we rely heavily on internal labs, so all potential users and their configuration need to go through us first, as the internal infrastructure is not available from public networks.

  • manually request that a test be executed (independent of a CI trigger)? No (unless they manually submit a Beaker job)
  • see the results of recent tests? Yes (filter GitLab, Beaker, reports from mailing lists and artifacts/logs). Limited functionality right now
  • set the pass criteria for a test? Yes, file a PR for test modification (granularity of arch/HW/distro/kernel version etc. is doable)
    • set the threshold value for a benchmark test? Yes, see above
    • set the list of testcase results to ignore? Yes, but the set of ignored results is global for all CI runs right now

While the configuration we currently use is global, it can be easily adjusted for the test to be ignored only on specific arches/kernel trees.

  • provide a rating for a test? (e.g. give it 4 stars out of 5) Yes, please reply to the report or file an issue for the test
  • customize a test? Yes, file a PR with requested changes and reasoning, or fork a test and change the metadata for runs
    • alter the command line for the test program? Yes, see above
    • alter the environment of the test program? Yes, see above
    • specify to skip a testcase? Yes, see above
    • set a new expected value for a test? Yes, see above
    • edit the test program source? Yes, see above
  • customize the notification criteria? Partially, we offer turning off "pass" emails if you are interested in seeing only failures. Better customization is WIP
    • customize the notification mechanism (eg. e-mail, text) No, only email notifications are available right now
  • generate a custom report for a set of runs? No
  • save the report parameters to generate the same report in the future?

Yes, the metadata is saved in the pipeline so the same report can be recreated by calling the reporter on the same pipeline

Requirements

Does your test framework:

  • require minimum software on the DUT? Tree dependent, installed by Beaker before testing starts
  • require minimum hardware on the DUT (e.g. memory) Test dependent, HW chosen based on test (and tree) requirements
    • If so, what?

Most of the toolchains and library versions are decided by the distro that's used, and that depends on what we are testing. E.g. we can't compile upstream kernels with a CentOS 7 toolchain.

  • require agent software on the DUT? Yes
    • If so, what agent? restraint (see https://beaker-project.org/ for details)
  • is there optional agent software or libraries for the DUT? ???
  • require external hardware in your labs? No

APIS

Does your test framework:

  • use existing APIs or data formats to interact within itself, or with 3rd-party modules? Yes
  • have a published API for any of its sub-module interactions (any of the lines in the diagram)? Yes
  • Please provide a link or links to the APIs?
    • Python's GitLab API: https://python-gitlab.readthedocs.io/en/stable/ (usage sketch below)
    • Beaker job submission XML: https://beaker-project.org/docs/_downloads/beaker-job.rng
    • Patchwork v2 REST API
    • Fedmsg receiver for COPR and Koji: https://fedora-fedmsg.readthedocs.io/en/latest/topics.html
  • What is the nature of the APIs you currently use? Does the above answer these questions too?
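
Since the build and run metadata live in GitLab pipelines, the python-gitlab API linked above is the natural way to query them (hence the "usage sketch below" note). The project path, token handling and status filter are placeholders for illustration; the real query and retrigger scripts live in the pipeline-tools repository listed under the CLI tools.

    # Sketch: query recent failed pipelines and their failed jobs via python-gitlab.
    # The project path and token handling are placeholders.
    import os
    import gitlab

    gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])
    project = gl.projects.get("cki-project/example-pipeline")   # hypothetical path

    for pipeline in project.pipelines.list(status="failed", per_page=5):
        print("pipeline %s on %s: %s" % (pipeline.id, pipeline.ref, pipeline.status))
        for job in pipeline.jobs.list(all=True):
            if job.status == "failed":
                print("  failed job: %s" % job.name)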

Relationship to other software:

  • what major components does your test framework use? GitLab, Beaker, OpenShift
  • does your test framework interoperate with other test frameworks or software? Yes
    • which ones? Any tests can be used if they are wrapped into a Beaker task

Overview

Please list the major components of your test system.

Please list your major components here:

  • OpenShift containers -- actual pipeline core (GitLab runners), triggers, kernel compilation
  • Beaker -- machine provisioning and testing
  • GitLab -- the whole pipeline runs inside it, and it contains all metadata and configuration

Additional Data

Project web site:

  • https://cki-project.org

Presentation with introduction to CKI: Cookies for Kernel Developers (https://www.youtube.com/watch?v=9KwDWsAqivo) at DevConf CZ, January 2019

All source code:

  • https://gitlab.com/cki-project