CKI survey response

CKI survey response provided by Veronika Kabatova

Diagram

The diagram more or less works; there are just small differences in some of our pipelines (e.g. we grab completed Koji builds for testing Fedora kernels, so in that case we don't touch the source or build the kernel). But for generic purposes it works, and it is indeed very similar to our pipelines for source testing.

Survey Questions

  • What is the name of your test framework? CKI Project (Continuous Kernel Integration)

Which of the aspects below of the CI loop does your test framework perform?

Does your test framework:

source code access

  • access source code repositories for the software under test? Yes (unless testing RPM builds)
  • access source code repositories for the test software? Yes
  • include the source for the test software? Yes
  • provide interfaces for developers to perform code reviews? No
  • detect that the software under test has a new version? Yes
    • if so, how? (e.g. polling a repository, a git hook, scanning a mail list, etc.) Polling git trees and Patchwork, and listening on the fedmsg bus (see the sketch after this list)
  • detect that the test software has a new version? Yes
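
As a rough illustration of the "polling git trees" trigger, the sketch below asks a remote branch for its current head with git ls-remote and compares it against the last revision seen. The tree URL, branch and state file are placeholders, not CKI's actual configuration; Patchwork polling and the fedmsg listener are separate mechanisms.

```python
#!/usr/bin/env python3
"""Minimal sketch of polling a git tree for new revisions (not CKI's real code)."""
import pathlib
import subprocess

TREE = "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git"  # example tree
BRANCH = "master"
STATE = pathlib.Path("last-seen-sha.txt")  # hypothetical local state file


def head_sha(tree: str, branch: str) -> str:
    """Ask the remote for the current tip of the branch without cloning."""
    out = subprocess.run(
        ["git", "ls-remote", tree, f"refs/heads/{branch}"],
        check=True, capture_output=True, text=True,
    ).stdout
    return out.split()[0]


def main() -> None:
    current = head_sha(TREE, BRANCH)
    previous = STATE.read_text().strip() if STATE.exists() else ""
    if current != previous:
        STATE.write_text(current + "\n")
        print(f"new revision {current}: trigger a pipeline here")
    else:
        print("no new revision")


if __name__ == "__main__":
    main()
```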

test definitions

Does your test system:

  • have a test definition repository? Yes, multiple
    • if so, what data format or language is used (e.g. yaml, json, shell script)?

YAML for the metadata. Tests themselves can be in any language if they are wrapped as Beaker tasks, which requires some XML snippets.
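
For illustration only, here is what loading such per-test YAML metadata could look like; the field names below are made up for the example and are not the real CKI schema.

```python
"""Sketch of reading per-test YAML metadata; the fields are hypothetical."""
import yaml  # PyYAML

EXAMPLE = """
name: example/memory/stress
maintainers:
  - someone@example.com
dependencies:          # each test installs its own dependencies
  - stress-ng
max_runtime: 30m
environment:
  STRESS_DURATION: "600"
"""

meta = yaml.safe_load(EXAMPLE)
print(meta["name"], meta["dependencies"])
```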

Does your test definition include:

  • source code (or source code location)? Yes (part of configuration)
  • dependency information? Yes (each test is responsible for handling its dependencies)
  • execution instructions? Yes (in each test's README)
  • command line variants? Yes (in each test's README)
  • environment variants? Yes
  • setup instructions? Yes (in each test's README)
  • cleanup instructions? No, each test is responsible for cleaning up after itself
    • if anything else, please describe:

Does your test system:

  • provide a set of existing tests? Yes
    • if so, how many? Currently 24 tests are open sourced, but there are some more running internally. Some tests' reliability is still a work in progress.

build management

Does your test system:

  • build the software under test (e.g. the kernel)? Yes, unless testing already built RPMs (e.g. Fedora kernels built in Koji and COPR)
  • build the test software (e.g. dbench)? Yes, if needed
  • build other software (such as the distro, libraries, firmware)? No, everything is installed by Beaker from images and distro repositories

  • support cross-compilation? Yes
  • require a toolchain or build system for the SUT? Yes
  • require a toolchain or build system for the test software? Test-dependent, some need to be compiled and some not
  • come with pre-built toolchains?

Yes, RPMs installed in Beaker before testing starts. Build containers with toolchains for kernel compilation are also available at [1]

  • store the build artifacts for generated software? Yes
    • in what format is the build metadata stored (e.g. json)? GitLab CI's variables and pipelines
    • are the build artifacts stored as raw files or in a database? Raw files (GitLab artifacts; see the sketch after this list)
      • if a database, what database?
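
A hedged sketch of what querying those GitLab-stored artifacts can look like with the python-gitlab library; the instance URL, token and project path are placeholders and error handling is omitted.

```python
"""Sketch of pulling build/run metadata and artifacts out of GitLab."""
import pathlib

import gitlab  # python-gitlab

gl = gitlab.Gitlab("https://gitlab.example.com", private_token="REDACTED")
project = gl.projects.get("cki-project/example-pipeline")  # hypothetical path

# Walk the most recent pipelines and their jobs; artifacts are plain files.
for pipeline in project.pipelines.list(per_page=3):
    print(pipeline.id, pipeline.status, pipeline.ref)
    for job in pipeline.jobs.list(all=True):
        print("  ", job.name, job.status)
        if job.status == "success":
            # Download the job's artifact archive (a zip of raw files).
            data = project.jobs.get(job.id, lazy=True).artifacts()
            pathlib.Path(f"{job.id}-artifacts.zip").write_bytes(data)
```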

Test scheduling/management

Does your test system:

  • check that dependencies are met before a test is run? Yes, each test is responsible for listing its dependencies and aborts if they can't be installed

  • schedule the test for the DUT? Yes
    • select an appropriate individual DUT based on SUT or test attributes? Yes
    • reserve the DUT? Yes (for the test duration, but can be configured to extend the reservation)
    • release the DUT? Yes
  • install the software under test to the DUT? Yes
  • install required packages before a test is run? Yes
  • require a particular bootloader on the DUT? (e.g. grub, uboot, etc.) No
  • deploy the test program to the DUT? Yes
  • prepare the test environment on the DUT? Yes
  • start a monitor (another process to collect data) on the DUT? Yes, part of Beaker's framework
  • start a monitor on external equipment? It's part of Beaker labs but we don't use it
  • initiate the test on the DUT? Yes
  • clean up the test environment on the DUT? Yes

Most of the above is handled by Beaker.
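
To make the Beaker hand-off concrete, here is a minimal sketch of submitting a job with the bkr CLI; the distro requirement, task names and parameter are placeholders, not the job XML that CKI actually generates.

```python
"""Sketch of handing work to Beaker: write a job XML and submit it with bkr."""
import pathlib
import subprocess

JOB_XML = """\
<job>
  <whiteboard>CKI example run</whiteboard>
  <recipeSet>
    <recipe>
      <distroRequires>
        <and>
          <distro_name op="=" value="Fedora-40"/>
        </and>
      </distroRequires>
      <hostRequires/>
      <task name="/distribution/check-install" role="STANDALONE"/>
      <task name="/kernel/example/ltp-lite" role="STANDALONE">
        <params>
          <param name="KPKG_URL" value="https://example.com/kernel.tar.gz"/>
        </params>
      </task>
    </recipe>
  </recipeSet>
</job>
"""

pathlib.Path("job.xml").write_text(JOB_XML)

# bkr comes from the beaker-client package; job-submit prints the job ID (J:NNNN).
subprocess.run(["bkr", "job-submit", "job.xml"], check=True)
```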


DUT control

Does your test system:

  • store board configuration data? Yes, stored in Beaker
    • in what format? No idea, we aren't Beaker developers
  • store external equipment configuration data? No idea, we aren't Beaker developers
    • in what format?
  • power cycle the DUT? Yes
  • monitor the power usage during a run? No
  • gather a kernel trace during a run? We have the functionality but it's not in use currently.
  • claim other hardware resources or machines (other than the DUT) for use during a test? Yes, test dependent
  • reserve a board for interactive use (i.e. remove it from automated testing)? Yes, configurable (currently disabled for CI runs)
  • provide a web-based control interface for the lab? Yes
  • provide a CLI control interface for the lab? Yes

Run artifact handling

Does your test system:

  • store run artifacts Yes
    • in what format? XML/JSON, provided by Beaker
  • put the run meta-data in a database? Does GitLab's pipeline history count as a DB?
    • if so, which database?
  • parse the test logs for results? Only the results, not the logs. The onboarded tests need to give clear results; otherwise we don't run them
  • convert data from test logs into a unified format? No
    • if so, what is the format?
  • evaluate pass criteria for a test (e.g. ignored results, counts or thresholds)? Yes
  • do you have a common set of result names? (e.g. pass, fail, skip, etc.) Yes
    • if so, what are they?

Provided by Beaker: PASS, FAIL, WARN, SKIP, PANIC, NONE, NEW. These get interpreted into binary pass/fail in the reports. We use a WAIVED status on top of these for test results that can be ignored.
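
A minimal sketch of how such Beaker results could be collapsed into the binary pass/fail used in reports, with a WAIVED overlay on top. The job ID, the waived-test list, and the choice of which result names count as passing are assumptions for illustration, not CKI's actual policy.

```python
"""Sketch of mapping Beaker task results to binary pass/fail plus WAIVED."""
import subprocess
import xml.etree.ElementTree as ET

PASSING = {"PASS", "SKIP", "NONE"}        # assumption: anything else counts as a failure
WAIVED = {"/kernel/example/flaky-test"}   # hypothetical globally waived test

# bkr job-results prints the job's results XML on stdout; J:12345 is a placeholder.
xml_text = subprocess.run(
    ["bkr", "job-results", "J:12345"],
    check=True, capture_output=True, text=True,
).stdout

overall_ok = True
for task in ET.fromstring(xml_text).iter("task"):
    name = task.get("name")
    result = (task.get("result") or "NONE").upper()
    if name in WAIVED:
        print(f"WAIVED {name} ({result})")
        continue
    ok = result in PASSING
    overall_ok = overall_ok and ok
    verdict = "PASS" if ok else "FAIL"
    print(f"{verdict:5} {name} ({result})")

print("overall:", "pass" if overall_ok else "fail")
```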

  • How is run data collected from the DUT?
    • e.g. by pushing from the DUT, or pulling from a server? Pulled from Beaker
  • How is run data collected from external equipment? N/A
  • Is external equipment data parsed? N/A

User interface

Does your test system:

  • have a visualization system? No proper dashboard yet, but results can be queried in GitLab (only available internally right now)
  • show build artifacts to users? Yes, links in reports and available in GitLab
  • show run artifacts to users? No logs, only results. We need to deal with log sanitizing first
  • do you have a common set of result colors? Partially
    • if so, what are they? Red (failure) vs green (pass) for GitLab pipelines, 'X' vs check-mark emoji in reports
  • generate reports for test runs? Yes
  • notify users of test results by e-mail? Yes
  • can you query (aggregate and filter) the build meta-data? Yes but limited (GitLab's pipeline jobs and reports)
  • can you query (aggregate and filter) the run meta-data? Yes but limited (GitLab's pipeline jobs, reports and Beaker results)
  • what language or data format is used for online results presentation? (e.g. HTML, Javascript, xml, etc.) Whatever the GitLab web UI uses, since that's the only thing we have right now.

  • what language or data format is used for reports? (e.g. PDF, excel, etc.) Plaintext emails
  • does your test system have a CLI control tool? Yes, multiple different tools for different parts of the pipeline
    • what is it called?

bkr (Beaker CLI), a set of Python scripts to retrigger and query pipelines, kpet for patch evaluation and test picking, skt for patch application, kernel building and Beaker interaction... and I'm sure I'm forgetting some others.


Languages:

Examples: json, python, yaml, C, javascript, etc.

  • what is the base language of your test framework core?

Beaker tests use beakerlib/restraint wrappers (Bash/Makefiles), but the tests themselves can be in any language. CKI itself uses Python and YAML the most, with some Bash thrown in.

What languages or data formats are users required to learn? (as opposed to those used internally) YAML

Can a user do the following with your test framework:

Please note that we rely heavily on internal labs, so all potential users and their configuration need to go through us first, as the internal infrastructure is not available from public networks.

  • manually request that a test be executed (independent of a CI trigger)? No (unless they manually submit a Beaker job)
  • see the results of recent tests? Yes (filter GitLab, Beaker and reports). Limited functionality right now
  • set the pass criteria for a test? Yes, file a PR for test modification (granularity of arch/HW/distro/kernel version etc. is doable)
    • set the threshold value for a benchmark test? Yes, see above
    • set the list of testcase results to ignore? Yes, but the set of ignored results is global for all CI runs right now
  • provide a rating for a test? (e.g. give it 4 stars out of 5) Yes, please reply to the report or file an issue for the test
  • customize a test? Yes, file a PR with the requested changes and reasoning, or fork a test and change the metadata for runs
    • alter the command line for the test program? Yes, see above
    • alter the environment of the test program? Yes, see above
    • specify to skip a testcase? Yes, see above
    • set a new expected value for a test? Yes, see above
    • edit the test program source? Yes, see above
  • customize the notification criteria? Partially, we offer turning off "pass" emails if you are interested in seeing only failures
    • customize the notification mechanism (e.g. e-mail, text)? No, only email is available right now
  • generate a custom report for a set of runs? No
  • save the report parameters to generate the same report in the future? Yes, the metadata is saved in the pipeline, so the same report can be recreated by calling the reporter on the same pipeline

Requirements

Does your test framework:

  • require minimum software on the DUT? Tree dependent, installed by Beaker before testing starts
  • require minimum hardware on the DUT (e.g. memory)? Test dependent, HW chosen based on test (and tree) requirements
    • If so, what? (e.g. POSIX shell or some other interpreter, specific libraries, command line tools, etc.) Most of the toolchains and library versions are decided by the distro that's used, and that depends on what we are testing. E.g. we can't compile upstream kernels with the CentOS 7 toolchain

  • require agent software on the DUT? (e.g. extra software besides production software) Yes

  • is there optional agent software or libraries for the DUT? Not sure what this question means.

  • require external hardware in your labs? No

APIS

Does your test framework:

  • use existing APIs or data formats to interact within itself, or with 3rd-party modules? Yes
  • have a published API for any of its sub-module interactions (any of the lines in the diagram)? Yes

    • Please provide a link or links to the APIs?
    • Python's GitLab API: [2]
    • Beaker job submission XML: [3]
    • Patchwork v2 REST API: [4]
    • Fedmsg receiver for COPR and Koji: [5]
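
As a sketch of what the fedmsg side can look like: listen on the bus and react to Koji/COPR build events. The topic names are from memory, and the handler is a stub, not the real CKI trigger code.

```python
"""Sketch of a fedmsg receiver for Koji/COPR build events (not CKI's real receiver)."""
import fedmsg

# Completed Koji builds and finished COPR builds; topic names may differ per deployment.
INTERESTING = (
    "org.fedoraproject.prod.buildsys.build.state.change",
    "org.fedoraproject.prod.copr.build.end",
)

# tail_messages() yields (name, endpoint, topic, msg) tuples from the bus.
for name, endpoint, topic, msg in fedmsg.tail_messages():
    if topic in INTERESTING:
        print(f"build event on {topic}: trigger a test pipeline here")
```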


Sorry - this is kind of open-ended...

  • What is the nature of the APIs you currently use?

Are they:

    • RPCs?
    • Unix-style? (command line invocation, while grabbing sub-tool output)
    • compiled libraries?
    • interpreter modules or libraries?
    • web-based APIs?
    • something else?

Does the above answer these questions too?

Relationship to other software:

  • what major components does your test framework use (e.g. Jenkins, MongoDB, Squad, Lava, etc.)? GitLab, Beaker, UpShift
  • does your test framework interoperate with other test frameworks or software? Yes
    • which ones? Any test can be used if it is wrapped into a Beaker task

Overview

Please list the major components of your test system:

  • UpShift containers -- the actual pipeline core (GitLab runners), triggers, kernel compilation
  • Beaker -- machine provisioning and testing
  • GitLab -- the whole pipeline runs inside it; contains all metadata and configuration

Additional Data