Test definition survey

Here is a list of test definition fields, attributes, file formats, operations, instructions, functions, etc. (I won't know exactly what they consist of until I see them).

This is the object in your test system that "defines" what a test is. It likely has meta-data about the program to run, how to get the program started, perhaps what is required for the program to run, how the results should be interpreted, and so on.

Test definition data survey

Each sub-section below has data about test definitions from a different system.

For many of these, there are links to the test definitions for that system: one simple, one characteristic, and a link to a repository containing many of them.

Also, each sub-section has data that describes the files, the fields, and how the different fields are used in that system.

Fuego

fuego files

  • fuego_test.sh
  • spec.json
  • parser.py - has the testlog parser for this test
  • criteria.json - has the pass criteria for this test
  • test.yaml - has meta-data for this test
  • chart_config.json - has charting configuration
  • reference.json - has units for test results
  • docs - directory of test and testcase documentation
  • (test program source) - tarball, or repository reference for test program source
  • (patches against test program source) - changes for test program source

jenkins files

  • config.xml (job file) - Jenkins job description for a test ((board, spec, test) combination)

fields

  • config.xml::actions
  • config.xml::description
  • config.xml::keepDependencies
  • config.xml::scm
  • config.xml::assignedNode - tag for which board or set of boards can run this job
  • config.xml::canRoam
  • config.xml::disabled
  • config.xml::blockBuildWhenDownstreamBuilding
  • config.xml::blockBuildWhenUpstreamBuilding
  • config.xml::triggers
  • config.xml::concurrentBuild
  • config.xml::customWorkspace
  • config.xml::builders
  • config.xml::hudson.tasks.Shell:command - Fuego command to run (includes board, spec, timeout, flags, and test)
  • config.xml::publishers
  • config.xml::flotile.FlotPublisher
  • config.xml::hudson.plugins.descriptionSetter.DescriptionSetterPublisher(:regexp,:regexpForFailed,:description,:descriptionForFailed,:setForMatrix)
  • config.xml::buildWrappers
  • fuego_test.sh::NEED_* - 'need' variables for declarative dependency checks
  • fuego_test.sh::tarball - program source reference (can be local tarball or remote tarball, or url?)
  • fuego_test.sh::test_pre_check - (optional) shell function to test dependencies and pre-conditions
  • fuego_test.sh::test_build - shell function to build test program source (a minimal fuego_test.sh sketch appears after this list)
  • fuego_test.sh::test_deploy - shell function to put test program on target board
  • fuego_test.sh::test_run - shell function to run test program on the target board
  • fuego_test.sh::test_snapshot - (optional) shell function to gather machine status
  • fuego_test.sh::test_fetch_results - (optional) shell function to gather results and logs from target board
  • fuego_test.sh::test_processing - shell function to determine result
  • spec.json::testName - name of the test
  • spec.json::specs - list of test specs (variants)
  • spec.json::specs[<specname>].xxx - arbitrary test variables for the indicated test spec
  • spec.json::specs[<specname>].skiplist - list of testcases to skip
  • spec.json::specs[<specname>].extra_success_links - links for Jenkins display on test success
  • spec.json::specs[<specname>].extra_fail_links - links for Jenkins display on test failure
  • parser.py - python code to parse the testlog from (test_run::report(), report_live(), and log_this()) calls
  • reference.json::test_sets::name
  • reference.json::test_sets::test_cases - list of test cases in this test_set
  • reference.json::test_sets::test_cases::name
  • reference.json::test_sets::test_cases::measurements - list of measurements in this test case
  • reference.json::test_sets::test_cases::measurements::name
  • reference.json::test_sets::test_cases::measurements::unit
  • criteria.json::schema_version
  • criteria.json::criteria - list of results pass criteria
  • criteria.json::criteria::tguid - test globally unique identifier for this criteria
  • criteria.json::criteria::reference - reference condition for this criteria
  • criteria.json::criteria::reference::value - reference value(s) for this criteria
  • criteria.json::criteria::reference::operator - operator for this condition (eq, le, lt, ge, gt, bt, ne)
  • criteria.json::criteria::min_pass
  • criteria.json::criteria::max_fail
  • criteria.json::criteria::fail_ok_list
  • criteria.json::criteria::must_pass_list
  • test.yaml::fuego_package_version - indicates the version of package (in case of changes to the package schema). For now, this is always 1.
  • test.yaml::name - has the full Fuego name of the test. Ex: Benchmark.iperf
  • test.yaml::description - has an English description of the test
  • test.yaml::license - has an SPDX identifier for the test.
  • test.yaml::author - the author or authors of the base test
  • test.yaml::maintainer - the maintainer of the Fuego materials for this test
  • test.yaml::version - the version of the base test
  • test.yaml::fuego_release - the version of Fuego materials for this test. This is a monotonically incrementing integer, starting at 1 for each new version of the base test.
  • test.yaml::type - either Benchmark or Functional
  • test.yaml::tags - a list of tags used to categorize this test. This is intended to be used in an eventual online test store.
  • test.yaml::tarball_src - a URL where the tarball was originally obtained from
  • test.yaml::gitrepo - a git URL where the source may be obtained from
  • test.yaml::host_dependencies - a list of Debian package names that must be installed in the docker container in order for this test to work properly. This field is optional, and indicates packages needed that are beyond those included in the standard Fuego host distribution in the Fuego docker container.
  • test.yaml::params - a list of test variables that may be used with this test, including their descriptions, whether they are optional or required, and an example value for each one
  • test.yaml::data_files - a list of the files that are included in this test. This is used as the manifest for packaging the test.
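
For illustration, here is a minimal sketch of a fuego_test.sh for a hypothetical Functional test. The hello program, tarball name and log pattern are made up for this example; put, report and log_compare are Fuego helper functions, and BOARD_TESTDIR and TESTDIR are Fuego-provided variables:

tarball=hello-test-1.0.tgz

function test_build {
    make
}

function test_deploy {
    # copy the test binary to the test directory on the target board
    put hello $BOARD_TESTDIR/fuego.$TESTDIR/
}

function test_run {
    # run the program on the board; 'report' captures its output into the testlog
    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./hello"
}

function test_processing {
    # pass if the testlog has at least 1 line matching ^SUCCESS
    log_compare "$TESTDIR" "1" "^SUCCESS" "p"
}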

Example

Here is an example test.yaml file, for the package Benchmark.iperf3:

fuego_package_version: 1
name: Benchmark.iperf3
description: |
    iPerf3 is a tool for active measurements of the maximum achievable
    bandwidth on IP networks.
license: BSD-3-Clause.
author: |
    Jon Dugan, Seth Elliott, Bruce A. Mah, Jeff Poskanzer, Kaustubh Prabhu,
    Mark Ashley, Aaron Brown, Aeneas Jaißle, Susant Sahani, Bruce Simpson,
    Brian Tierney.
maintainer: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
version: 3.1.3
fuego_release: 1
type: Benchmark
tags: ['network', 'performance']
tarball_src: https://iperf.fr/download/source/iperf-3.1.3-source.tar.gz
gitrepo: https://github.com/esnet/iperf.git
params:
    - server_ip:
        description: |
            IP address of the server machine. If not provided, then SRV_IP
            _must_ be provided on the board file. Otherwise the test will fail.
            if the server ip is assigned to the host, the test automatically
            starts the iperf3 server daemon. Otherwise, the tester _must_ make
            sure that iperf3 -V -s -D is already running on the server machine.
        example: 192.168.1.45
        optional: yes
    - client_params:
        description: extra parameters for the client
        example: -p 5223 -u -b 10G
        optional: yes
data_files:
    - chart_config.json
    - fuego_test.sh
    - parser.py
    - spec.json
    - criteria.json
    - iperf-3.1.3-source.tar.gz
    - reference.json
    - test.yaml

LAVA

For the 'files' part, each test in test-definitions is stored in a separate directory. The directory has to contain at least the YAML file that is compliant with the LAVA test definition format. We have a sanity check script (validate.py) that is executed on any pull request; this ensures that all files pushed to the repository are compliant. The usual practice is that the test directory also contains a test script (a shell script). The test script is responsible for installing dependencies, running the tests and parsing the results. There is no mandatory format for it, but test-definitions provides a library of functions that helps with writing test scripts. There are libraries for 'linux' and 'android'. We also host a directory for manual tests and a simple executor for them, but in the context of automated testing these are irrelevant.

files

  • <testname>.sh - is this the thing that will run on the target?
  • <testname>.yaml - describes test properties

fields

  • busybox.sh - shell code to execute things on target
  • testname.yaml::metadata::format - format of this yaml test definition file
  • testname.yaml::metadata::name - name of this test
  • testname.yaml::metadata::description - description of the test
  • testname.yaml::metadata::maintainer - list of email addresses of test maintainer(s)
  • testname.yaml::metadata::os - list of Linux distributions where this test can run
  • testname.yaml::metadata::scope - list of strings; can include: 'functional', 'performance', 'preempt-rt'
    • looks like a set of tags?
  • testname.yaml::environment - list of something: lava-test-shell?
  • testname.yaml::metadata::devices - list of device types (board names) where this test can run
  • testname.yaml::PARAMS - list of arbitrary test variables
  • testname.yaml::run - items for test execution
  • testname.yaml::run::steps - shell lines to execute the test (executed on target board)
  • libhugetlbfs.sh::VERSION, WORD_SIZE - test variables
  • libhugetlbfs.sh::libhugetlbfs_build_test - shell function to build libhugetlbfs software
  • libhugetlbfs.sh::libhugetlbfs_run_test - run run_tests.py, using variables, and call parse_output
  • libhugetlbfs.sh::libhugetlbfs_setup - prepare for test (mostly mount stuff)
  • libhugetlbfs.sh::libhugetlbfs_cleanup - unmount stuff and remove stuff
  • libhugetlbfs.sh::parse_output - parse the output and generate a result file in a specific format (one line per testcase?) (see the sketch after this list)
  • libhugetlbfs.sh::install - install pre-requisite packages
  • libhugetlbfs.sh::inline -- inline code to:
    • create output directory
    • check kernel config for required settings
    • call to install packages (pre-requisites)
    • setup
    • call to build_test
    • call to run_test
    • cleanup
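
For illustration, here is a hypothetical sketch of the kind of parsing step described above, assuming the result format is one '<testcase> <result>' line per test case. The file names, the OUTPUT and RESULT_FILE variables, and run_tests.sh are assumptions for this example; the real scripts in test-definitions use shared helper functions from the repository's shell library:

#!/bin/sh
# Hypothetical sketch only: turn raw test output into one result line per test case.
OUTPUT="$(pwd)/output"               # output directory (assumed)
RESULT_FILE="${OUTPUT}/result.txt"
mkdir -p "${OUTPUT}"

parse_output() {
    # assume the test program printed lines like "PASS: <name>" or "FAIL: <name>"
    grep -E '^(PASS|FAIL):' "${OUTPUT}/raw.log" | while read -r status name; do
        case "${status}" in
            PASS:) echo "${name} pass" >> "${RESULT_FILE}" ;;
            FAIL:) echo "${name} fail" >> "${RESULT_FILE}" ;;
        esac
    done
}

./run_tests.sh > "${OUTPUT}/raw.log" 2>&1 || true    # run_tests.sh is hypothetical
parse_output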

Yocto Project

An "on target" test of the compiler:

(same directory has simple python/perl tests and so on)

http://git.yoctoproject.org/cgit.cgi/poky/tree/meta/lib/oeqa/files (the test files, for context; they're just hello-world examples)

This is a "selftest" for the "devtool" command that is part of the overall build system, its a bit more complex with shared functions and tests for each of devtool's subcommands.

This has all the test code and core test definitions. Test definitions are in 'cases' directories under the "manual", "runtime", "sdk" and "selftest" directories.

There's an article about Yocto ptest testing, available at: https://lwn.net/Articles/788626/

Here's a link to a wiki page about ptest: https://wiki.yoctoproject.org/wiki/Ptest


files

  • testname.py: python module with test code

fields

  • testname.py::setup
  • testname.py::teardown
  • gcc.py::OETestID - indicates numeric testcase id
  • gcc.py::OETestDepends - appears to declare other testcases this test depends on
  • gcc.py::OEHasPackage - appears to declare a package that must be present in the image under test
  • gcc.py::test_gcc_compile
  • gcc.py::test_gpp_compile
  • gcc.py::test_gpp2_compile
  • gcc.py::test_make

0-day

files

  • tests/iozone
  • jobs/iozone.yaml
  • pack/iozone
  • stats/iozone
  • pkg/iozone/PKGBUILD
  • pkg/iozone/iozone.install
  • pkg/iozone/iozone3_434.tar.sig

fields

  • pack/iozone::filename
  • pack/iozone::WEB_URL
  • pack/iozone::download - shell function to download the source
  • pack/iozone::build - shell function to build the source (for linux-AMD64)
  • pack/iozone::install - shell function to install binaries
  • pack/iozone::pack - shell function to create a cpio file (the pack functions are sketched after this list)
  • test/iozone - shell script to execute iozone for each mount point
  • stats/iozone - ruby script to extract data from stdin and write aggregate data to stdout
  • PKGBUILD::validpgpkeys
  • PKGBUILD::pkgname
  • PKGBUILD::pkgver
  • PKGBUILD::pkgrel
  • PKGBUILD::pkgdesc
  • PKGBUILD::arch - tuple of supported architectures
  • PKGBUILD::url
  • PKGBUILD::license
  • PKGBUILD::depends
  • PKGBUILD::optdepends
  • PKGBUILD::install
  • PKGBUILD::source - indicate source url and signature
  • PKGBUILD::sha512sums
  • PKGBUILD::build - shell function to build linux
  • PKGBUILD::package - shell function to install files
  • jobs/iozone.yaml::suite
  • jobs/iozone.yaml::testcase
  • jobs/iozone.yaml::category
  • jobs/iozone.yaml::disk
  • jobs/iozone.yaml::fs - list of filesystems to test?
  • jobs/iozone.yaml::iosched - list of ioschedulers to test?
  • iozone.install::post_install
  • iozone.install::post_upgrade
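
For illustration, here is a hypothetical sketch of a pack-style file, based only on the field names listed above. The URL, paths, make target and the BM_ROOT variable are assumptions for this example; the actual conventions in 0-day/lkp-tests may differ:

filename=iozone3_434.tar
WEB_URL=http://www.iozone.org/src/current

download()
{
    wget -N "$WEB_URL/$filename"                       # fetch the source tarball
}

build()
{
    tar xf "$filename"
    make -C iozone3_434/src/current linux-AMD64        # build for linux-AMD64
}

install()
{
    mkdir -p "$BM_ROOT/bin"                            # BM_ROOT is an assumption here
    cp iozone3_434/src/current/iozone "$BM_ROOT/bin/"
}

pack()
{
    # bundle the installed files into a cpio archive
    (cd "$BM_ROOT" && find . | cpio -o -H newc) > iozone.cpio
}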

CKI

Note that CKI is built on top of 'beaker', which is a RedHat project that performs board management and provisioning services, and actually invokes the tests.

See more about the beaker meta-data here: https://beaker-project.org/docs/user-guide/task-metadata.html

CKI also has a "Test Suite Data" file, called index.yaml, that has some of the other meta-data associated with a test. This is in a repository that is not yet published (as of October 2019).

But there are examples of files from this repository (that have had information redacted for business reasons).

There is also a breakdown of the areas of this file in a presentation by CKI developers. See RedHat Joins the CI party, brings Cookies (PDF)

files

  • makefile
  • README.md
  • runtest.sh

Each makefile has targets: run, build, clean

Regarding the runtest.sh file, Veronika writes: You don't need to put all the code you use into a single runtest file; you can have helper scripts in the test directory that you call (which is the case with the xfstests you mentioned). You can also use common libraries that will be installed together with the test (from a different place) using the RhtsRequires clause.

fields

  • <environment>:METADATA = name of file that contains metadata
  • makefile:TESTVERSION
  • makefile:TOPLEVEL_NAMESPACE = location where this test is
  • makefile:PACKAGE_NAME = name of test
  • $METADATA:Owner = owner/maintainer of the test
    • value = name <email>
  • $METADATA:Name = test name
  • $METADATA:Path = test directory (TEST_DIR)
  • $METADATA:TestVersion = version of test
  • $METADATA:RunFor = ???
  • $METADATA:License = license of test
  • $METADATA:Description
  • $METADATA:Architectures
  • $METADATA:TestTime = looks like an expected duration
  • $METADATA:Priority = test priority (can be: "Normal")
  • $METADATA:Confidential = (can be: "no")
  • $METADATA:Destructive = (can be: "no")
  • $METADATA:Requires = (can be acpica-tools) Looks like a package requirement

Some explanation about 'Requires': People sometimes put more info into the test meta-data than the "required" fields suggest. In some cases this can be a residue of RHTS (the Beaker predecessor) and copy-pasting, as Beaker is still compatible with it; in other cases they simply use some less-known features or just go all out in writing the makefiles.

  • $METADATA:RhtsRequires = Specifies a library that should be installed for the test (I believe)

Note: RHTS stands for Red Hat Test System

  • $METADATA:Type = ??
  • README.md:Test Maintainer:

libraries

  • /usr/bin/rhts-environment.sh
  • /usr/share/beakerlib/beakerlib.sh

APIS

  • environment_var:$OUTPUTFILE - place where test output strings should be written (a combined usage sketch appears after this list)
    • example: echo "foo bar" | tee -a $OUTPUTFILE
  • shell function: rhts_submit_log()
    • usage: rhts_submit_log -l /mnt/redhat/user/acpi/acpitable.log
  • shell function: report_result()
    • report_result <test_name> [FAIL|PASS] <rcode>
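
Putting the libraries and APIs above together, a minimal runtest.sh might look like the following sketch. The test name, the mytest binary and the extra log path are hypothetical; the sourced libraries and the report_result/rhts_submit_log calls are the ones listed above:

#!/bin/bash
# Minimal sketch of a runtest.sh, using the Beaker libraries and APIs listed above.
. /usr/bin/rhts-environment.sh
. /usr/share/beakerlib/beakerlib.sh

TEST="/kernel/example/mytest"                 # hypothetical test name

echo "starting mytest" | tee -a $OUTPUTFILE   # test output goes to $OUTPUTFILE

if ./mytest >> $OUTPUTFILE 2>&1; then         # ./mytest is a hypothetical test binary
    result=PASS
else
    result=FAIL
fi

rhts_submit_log -l /tmp/mytest-extra.log      # attach an additional log file
report_result $TEST $result 0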

LTP

Currently (as of September 2019), LTP has test meta-data inside the C files.

Test descriptions for each testcase are in comments in the C code.

Test dependencies are expressed inside the C file for each test, in the tst_test structure. For example, needs_root is expressed there, as well as min_kver.

LTP has a new metadata project that is under construction, which generates data in json format for each test, based on the tst_test structure and structured comments.

See https://github.com/metan-ucw/ltp/tree/master/docparse

kselftest

Config fragments are called 'config' and are in each testing area's top-level directory.

Kees Cook just submitted a patch to introduce a file called 'settings' in the top-level directory for each testing area, which currently holds a field called 'timeout'. See https://lore.kernel.org/linux-kselftest/201909191359.1BFD926842@keescook/T/#t

files

  • Makefile
  • config
  • settings

fields

  • timeout (proposed, as of 9/2019)
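
For illustration, both the 'config' fragment and the proposed 'settings' file are plain key=value text files; the values below are made up for this example. A hypothetical config fragment (for a seccomp test directory):

CONFIG_SECCOMP=y
CONFIG_SECCOMP_FILTER=y

and a hypothetical settings file with the proposed field (timeout in seconds):

timeout=45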

Notes

Regarding additional kselftest meta-data, Kees wrote:

 I figured in the future it could hold details about expected environmental
 states (user, namespace, rlimits, etc). For example, I'd like to
 indicate that the seccomp tests should be run twice both as root and as
 a regular user.

Test Comparisons

OpenSSL

See OpenSSL test definition comparison

Sysbench

See Sysbench test definition comparison

Iperf

See Iperf test definition comparison

Test Job Requests

See Test Job Requests comparison