KernelCI survey response

KernelCI survey response provided by Kevin Hilman

Note that kernelCI relies heavily on LAVA for several aspects, so I tried not to duplicate LAVA responses.

Survey Questions

  • What is the name of your test framework? kernelCI

Does your test framework:

source code access

  • access source code repositories for the software under test?
  • access source code repositories for the test software? yes
  • include the source for the test software? yes
  • provide interfaces for developers to perform code reviews? yes
  • detect that the software under test has a new version? yes
    • if so, how? git polling (see the sketch below)
  • detect that the test software has a new version? yes
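
To illustrate the git-polling approach, the sketch below checks a tree for new commits by comparing the remote HEAD against the last commit built. The repository URL, branch, and state file are placeholders, not the actual kernelCI tooling.

    # Minimal git-polling sketch; repository URL, branch, and state file
    # are placeholders. Assumes git is installed and on PATH.
    import subprocess

    REPO = "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git"
    BRANCH = "master"
    STATE_FILE = "last_built_commit.txt"

    def remote_head(repo, branch):
        # `git ls-remote` prints "<sha>\t<ref>" for the requested ref.
        out = subprocess.check_output(["git", "ls-remote", repo, branch], text=True)
        return out.split()[0]

    def new_version():
        head = remote_head(REPO, BRANCH)
        try:
            last = open(STATE_FILE).read().strip()
        except FileNotFoundError:
            last = None
        if head != last:
            open(STATE_FILE, "w").write(head)
            return head  # new commit to build and test
        return None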

test definitions

Does your test system:

  • have a test definition repository? yes
    • if so, what data format or language is used? YAML format (defined by LAVA)

Does your test definition include:

  • source code (or source code location)?
  • dependency information?
  • execution instructions?
  • command line variants?
  • environment variants?
  • setup instructions?
  • cleanup instructions?
    • if anything else, please describe:

kernelCI uses LAVA for test definitions and running tests, so please refer to LAVA responses.
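
For context, a LAVA test definition is a small YAML document. The sketch below is an illustrative definition (not a real kernelCI test) embedded in python and parsed with PyYAML; see the LAVA responses for the authoritative format.

    # Illustrative LAVA-style test definition; names and steps are placeholders.
    # Requires PyYAML (pip install pyyaml).
    import yaml

    TEST_DEFINITION = """
    metadata:
      name: example-smoke-test
      format: Lava-Test Test Definition 1.0
      description: placeholder smoke test
    run:
      steps:
        - uname -a
        - lava-test-case uname --result pass
    """

    definition = yaml.safe_load(TEST_DEFINITION)
    print(definition["metadata"]["name"])   # -> example-smoke-test
    print(definition["run"]["steps"])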

Does your test system:

  • provide a set of existing tests?
    • if so, how many? kernelCI doesn't have its own tests; it is meant to run existing test suites. In addition to the suites already available via LAVA, we're running kselftest, DRM, and V4L2 suites.

build management

Does your test system:

  • build the software under test (e.g. the kernel)? yes
  • build the test software? yes
  • build other software (such as the distro, libraries, firmware)? yes
  • support cross-compilation? yes
  • require a toolchain or build system for the SUT? yes
  • require a toolchain or build system for the test software? no
  • come with pre-built toolchains? yes
  • store the build artifacts for generated software? yes
    • in what format is the build metadata stored (e.g. json)? JSON, stored in mongodb (see the sketch below)
    • are the build artifacts stored as raw files or in a database? raw files
      • if a database, what database?
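
As a rough idea of what the JSON build metadata looks like, the sketch below writes an illustrative record to a raw file and (optionally) into mongodb. The field names are placeholders, not the actual kernelci-backend schema.

    # Illustrative build-metadata record; field names are placeholders,
    # not the real kernelci-backend schema.
    import json

    build_meta = {
        "job": "mainline",
        "kernel": "v5.0-rc1",            # placeholder version
        "defconfig": "defconfig",
        "arch": "arm64",
        "status": "PASS",
        "artifacts": ["Image", "modules.tar.xz", "build.log"],
    }

    # Stored as a raw file alongside the build artifacts...
    with open("build-meta.json", "w") as f:
        json.dump(build_meta, f, indent=2)

    # ...and, hypothetically, inserted into mongodb with pymongo:
    # from pymongo import MongoClient
    # MongoClient()["kernelci"]["builds"].insert_one(build_meta)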

Test scheduling/management

Does your test system:

  • check that dependencies are met before a test is run? yes
  • schedule the test for the DUT? yes
    • select an appropriate individual DUT based on SUT or test attributes? yes
    • reserve the DUT?
    • release the DUT?
  • install the software under test to the DUT?
  • install required packages before a test is run?
  • require a particular bootloader on the DUT? (e.g. grub, u-boot, etc.)
  • deploy the test program to the DUT?
  • prepare the test environment on the DUT?
  • start a monitor (another process to collect data) on the DUT?
  • start a monitor on external equipment?
  • initiate the test on the DUT?
  • clean up the test environment on the DUT?

kernelCI uses LAVA for all of this.
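
Because scheduling and DUT handling are delegated to LAVA, triggering a run from the kernelCI side amounts to handing a job definition to the LAVA scheduler. A minimal sketch over LAVA's XML-RPC API follows; the server URL, credentials, and job file are placeholders.

    # Minimal sketch of submitting a job to a LAVA server over XML-RPC.
    # Server URL, user, token, and the job definition file are placeholders.
    import xmlrpc.client

    LAVA_URL = "https://user:token@lava.example.org/RPC2"

    job_definition = open("example-job.yaml").read()   # LAVA job definition (YAML)

    server = xmlrpc.client.ServerProxy(LAVA_URL)
    job_id = server.scheduler.submit_job(job_definition)
    print("submitted LAVA job", job_id)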

DUT control

Does your test system:

  • store board configuration data?
    • in what format?
  • store external equipment configuration data?
    • in what format?
  • power cycle the DUT?
  • monitor the power usage during a run?
  • gather a kernel trace during a run?
  • claim other hardware resources or machines (other than the DUT) for use during a test?
  • reserve a board for interactive use (ie remove it from automated testing)?
  • provide a web-based control interface for the lab?
  • provide a CLI control interface for the lab?

kernelCI uses LAVA for all of this.

Run artifact handling

Does your test system:

  • store run artifacts? yes
    • in what format? raw files and metadata in JSON
  • put the run meta-data in a database? yes
    • if so, which database? mongodb
  • parse the test logs for results? yes
  • convert data from test logs into a unified format? no
    • if so, what is the format?
  • evaluate pass criteria for a test (e.g. ignored results, counts or thresholds)? yes
  • do you have a common set of result names (e.g. pass, fail, skip, etc.)? yes
    • if so, what are they? pass, fail, skip, unknown
  • How is run data collected from the DUT?
    • e.g. by pushing from the DUT, or pulling from a server?

kernelCI uses LAVA. The DUT can push results, or other services can pull them from the LAVA dispatcher (see the sketch below).

  • How is run data collected from external equipment? raw logs
  • Is external equipment data parsed? no, not fully automated
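
A minimal sketch of pulling run data from a LAVA server over XML-RPC once a job has finished; the server URL and job id are placeholders, and the exact method names may differ between LAVA versions.

    # Sketch of pulling run data (status and raw results) from LAVA.
    # Server URL and job id are placeholders; method names may vary by
    # LAVA version.
    import xmlrpc.client

    server = xmlrpc.client.ServerProxy("https://user:token@lava.example.org/RPC2")
    job_id = 12345   # placeholder job id

    # Job state, e.g. to poll until the run completes.
    print(server.scheduler.job_status(job_id))

    # Raw results as YAML, ready to be parsed and pushed into the backend.
    print(server.results.get_testjob_results_yaml(job_id))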

User interface

Does your test system:

  • have a visualization system? yes
  • show build artifacts to users? yes
  • show run artifacts to users? yes
  • do you have a common set of result colors? yes
    • if so, what are they? pass: green, fail: red, skip: yellow
  • generate reports for test runs? yes
  • notify users of test results by e-mail? yes
  • can you query (aggregate and filter) the build meta-data? yes
  • can you query (aggregate and filter) the run meta-data? yes
  • what language or data format is used for online results presentation? custom html/javascript app
  • what language or data format is used for reports? e-mail (plain text and html)
  • does your test system have a CLI control tool? yes
    • what is it called? no name; a collection of python tools for various tasks

Languages:

Examples: json, python, yaml, C, javascript, etc.

  • what is the base language of your test framework core?
    • kernelCI uses LAVA, which is written in python; most other tooling/scripting is also written in python.
    • Web frontend is python/flask + javascript.
  • What languages or data formats is the user required to learn? python


Can a user do the following with your test framework:

  • Can a user manually request that a test be executed? no
  • see the results of recent tests? yes
  • set the pass criteria for a test? yes
    • set the threshold value for a benchmark test? no
    • set the list of testcase results to ignore? no
  • provide a rating for a test? (e.g. give it 4 stars out of 5) no
  • customize a test? yes
    • alter the command line for the test program? no
    • alter the environment of the test program? no
    • specify to skip a testcase? no
    • set a new expected value for a test? no
    • edit the test program source? no
  • customize the notification criteria? no
    • customize the notification mechanism (eg. e-mail, text) no
  • generate a custom report for a set of runs? no
  • save the report parameters to generate the same report in the future? no

Requirements

Does your test framework:

  • require minimum software on the DUT? yes, a netboot-capable bootloader
  • require minimum hardware on the DUT (e.g. memory)? yes
    • If so, what? ~8 MB for the kernel + a minimal ramdisk
  • require agent software on the DUT? no
    • If so, what agent?
  • is there optional agent software or libraries for the DUT? no
  • require external hardware in your labs? yes, power-switching

APIs

Does your test framework:

  • use existing APIs or data formats to interact within itself, or with 3rd-party modules? LAVA
  • have a published API for any of its sub-module interactions (any of the lines in the diagram)? yes
    • Please provide a link or links to the APIs? [1]
  • What is the nature of the APIs you currently use?

Are they:

    • RPCs? LAVA control via XML-RPC
    • Unix-style? lots of cmdline tooling/scripting
    • compiled libraries?
    • interpreter modules or libraries?
    • web-based APIs? REST API for interaction with backend/storage (see the sketch below)
    • something else?
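
To illustrate the web-based side, the sketch below queries build metadata from a results backend over REST. The hostname, endpoint, query parameters, and token are placeholders, not the actual kernelci-backend routes.

    # Hypothetical REST query against a results backend; URL, endpoint,
    # and token header are placeholders. Requires the requests package.
    import requests

    API = "https://api.example.org"
    HEADERS = {"Authorization": "my-secret-token"}   # placeholder token

    resp = requests.get(API + "/build",
                        params={"job": "mainline", "limit": 10},
                        headers=HEADERS)
    resp.raise_for_status()
    for build in resp.json().get("result", []):
        print(build)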

Relationship to other software:

  • what major components does your test framework use? Jenkins, LAVA, mongodb
  • does your test framework interoperate with other test frameworks or software? yes
    • which ones? LAVA

Overview

Please list your major components here:

  • Build/Test Management (Jenkins triggers, python tooling, git repos)
  • Test Scheduling (LAVA)
  • DUT Control (LAVA)
  • Results Management (kernelci-backend)
  • View/Interact (kernelci-frontend)

Additional Data