Test definition survey

Here is a list of test definition fields, attributes, file formats, operations, instructions, functions, etc. (I won't know even what they consist of until I see them).

This is the object in your test system that "defines" what a test is. It likely has meta-data about the program to run, how to get the program started, maybe what things are required for the program to run, how the results should be interpreted, etc.

Test definition data survey

Each sub-section below has data about test definitions from a different system.

For many, there are links to the test definitions for that system: one simple, one characteristic, and a link to a repository containing many of them.

Also, each sub-section has data describing the files, the fields, and how the different fields are used in that system.

Fuego

fuego files

  • fuego_test.sh
  • spec.json
  • parser.py - has the testlog parser for this test
  • criteria.json - has the pass criteria for this test
  • test.yaml - has meta-data for this test
  • chart_config.json - has charting configuration
  • reference.json - has units for test results
  • docs - directory of test and testcase documentation
  • (test program source) - tarball, or repository reference for test program source
  • (patches against test program source) - changes for test program source

jenkins files

  • config.xml (job file) - Jenkins job description for a test (a (board, spec, test) combination)

fields

  • config.xml::actions
  • config.xml::description
  • config.xml::keepDependencies
  • config.xml::scm
  • config.xml::assignedNode - tag for which board or set of boards can run this job
  • config.xml::canRoam
  • config.xml::disabled
  • config.xml::blockBuildWhenDownstreamBuilding
  • config.xml::blockBuildWhenUpstreamBuilding
  • config.xml::triggers
  • config.xml::concurrentBuild
  • config.xml::customWorkspace
  • config.xml::builders
  • config.xml::hudson.tasks.Shell:command - Fuego command to run (includes board, spec, timeout, flags, and test)
  • config.xml::publishers
  • config.xml::flotile.FlotPublisher
  • config.xml::hudson.plugins.descriptionSetter.DescriptionSetterPublisher(:regexp,:regexpForFailed,:description,:descriptionForFailed,:setForMatrix)
  • config.xml::buildWrappers
  • fuego_test.sh::NEED_* - 'need' variables for declarative dependency checks
  • fuego_test.sh::tarball - program source reference (can be local tarball or remote tarball, or url?)
  • fuego_test.sh::test_pre_check - (optional) shell function to test dependencies and pre-conditions
  • fuego_test.sh::test_build - shell function to build test program source
  • fuego_test.sh::test_deploy - shell function to put test program on target board
  • fuego_test.sh::test_run - shell function to run test program on the target board
  • fuego_test.sh::test_snapshot - (optional) shell function to gather machine status
  • fuego_test.sh::test_fetch_results - (optional) shell function to gather results and logs from target board
  • fuego_test.sh::test_processing - shell function to determine result
  • spec.json::testName - name of the test
  • spec.json::specs - list of test specs (variants)
  • spec.json::specs[<specname>].xxx - arbitrary test variables for the indicated test spec
  • spec.json::specs[<specname>].skiplist - list of testcases to skip
  • spec.json::specs[<specname>].extra_success_links - links for Jenkins display on test success
  • spec.json::specs[<specname>].extra_fail_links - links for Jenkins display on test failure
  • parser.py - python code to parse the testlog from (test_run::report(), report_live(), and log_this()) calls
  • reference.json::test_sets::name
  • reference.json::test_sets::test_cases - list of test cases in this test_set
  • reference.json::test_sets::test_cases::name
  • reference.json::test_sets::test_cases::measurements - list of measurements in this test case
  • reference.json::test_sets::test_cases::measurements::name
  • reference.json::test_sets::test_cases::measurements::unit
  • criteria.json::schema_version
  • criteria.json::criteria - list of results pass criteria
  • criteria.json::criteria::tguid - test globally unique identifier for this criteria
  • criteria.json::criteria::reference - reference condition for this criteria
  • criteria.json::criteria::reference::value - reference value(s) for this criteria
  • criteria.json::criteria::reference::operator - operator for this condition (eq, le, lt, ge, gt, bt, ne)
  • criteria.json::criteria::min_pass
  • criteria.json::criteria::max_fail
  • criteria.json::criteria::fail_ok_list
  • criteria.json::criteria::must_pass_list
  • test.yaml::fuego_package_version - indicates the version of the package (in case of changes to the package schema). For now, this is always 1.
  • test.yaml::name - has the full Fuego name of the test. Ex: Benchmark.iperf
  • test.yaml::description - has an English description of the test
  • test.yaml::license - has an SPDX identifier for the test.
  • test.yaml::author - the author or authors of the base test
  • test.yaml::maintainer - the maintainer of the Fuego materials for this test
  • test.yaml::version - the version of the base test
  • test.yaml::fuego_release - the version of Fuego materials for this test. This is a monotonically incrementing integer, starting at 1 for each new version of the base test.
  • test.yaml::type - either Benchmark or Functional
  • test.yaml::tags - a list of tags used to categorize this test. This is intended to be used in an eventual online test store.
  • test.yaml::tarball_src - a URL where the tarball was originally obtained from
  • test.yaml::gitrepo - a git URL where the source may be obtained from
  • test.yaml::host_dependencies - a list of Debian package names that must be installed in the docker container in order for this test to work properly. This field is optional, and indicates packages needed that are beyond those included in the standard Fuego host distribution in the Fuego docker container.
  • test.yaml::params - a list of test variables that may be used with this test, including their descriptions, whether they are optional or required, and an example value for each one
  • test.yaml::data_files - a list of the files that are included in this test. This is used as the manifest for packaging the test.

Example

Here is an example test.yaml file, for the package Benchmark.iperf3:

fuego_package_version: 1
name: Benchmark.iperf3
description: |
    iPerf3 is a tool for active measurements of the maximum achievable
    bandwidth on IP networks.
license: BSD-3-Clause.
author: |
    Jon Dugan, Seth Elliott, Bruce A. Mah, Jeff Poskanzer, Kaustubh Prabhu,
    Mark Ashley, Aaron Brown, Aeneas Jaißle, Susant Sahani, Bruce Simpson,
    Brian Tierney.
maintainer: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
version: 3.1.3
fuego_release: 1
type: Benchmark
tags: ['network', 'performance']
tarball_src: https://iperf.fr/download/source/iperf-3.1.3-source.tar.gz
gitrepo: https://github.com/esnet/iperf.git
params:
    - server_ip:
        description: |
            IP address of the server machine. If not provided, then SRV_IP
            _must_ be provided on the board file. Otherwise the test will fail.
            if the server ip is assigned to the host, the test automatically
            starts the iperf3 server daemon. Otherwise, the tester _must_ make
            sure that iperf3 -V -s -D is already running on the server machine.
        example: 192.168.1.45
        optional: yes
    - client_params:
        description: extra parameters for the client
        example: -p 5223 -u -b 10G
        optional: yes
data_files:
    - chart_config.json
    - fuego_test.sh
    - parser.py
    - spec.json
    - criteria.json
    - iperf-3.1.3-source.tar.gz
    - reference.json
    - test.yaml
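
To go with the test.yaml example, here is a rough sketch of what a fuego_test.sh might contain, showing the shell functions listed in the fields above. This is not taken from a real Fuego test: the test program name is made up, and the helper calls (put, report, log_compare) and variables ($BOARD_TESTDIR, $TESTDIR) are used here as assumptions about the Fuego core API.

tarball=../hello-1.0.tar.gz   # local tarball with the test program source

function test_build {
    # build the test program in the unpacked source directory
    make
}

function test_deploy {
    # copy the built binary to the test directory on the target board
    put hello $BOARD_TESTDIR/fuego.$TESTDIR/
}

function test_run {
    # run the program on the target and capture its output in the testlog
    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./hello"
}

function test_processing {
    # check for the expected string in the testlog to determine pass/fail
    log_compare "$TESTDIR" "1" "^SUCCESS" "p"
}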

LAVA

For the 'files' part, each test in test-definitions is stored in a separate directory. The directory has to contain at least a YAML file that is compliant with the LAVA test definition format. We have a sanity check script (validate.py) that is executed on any pull request; this ensures that all files pushed to the repository are compliant. The usual practice is that the test directory contains a test script (a shell script). The script is responsible for installing dependencies, running tests, and parsing results. There is no mandatory format there, but test-definitions provides a library of functions that helps with writing test scripts. There are libraries for 'linux' and 'android'. We also host a directory of manual tests and a simple executor for them, but in the context of automated testing these are irrelevant.

files

  • <testname>.sh - is the thing that will run on the target?
  • <testname>.yaml - describes test properties

fields

  • busybox.sh - shell code to execute things on target
  • testname.yaml::metadata::format - format of this yaml test definition file
  • testname.yaml::metadata::name - name of this test
  • testname.yaml::metadata::description - description of the test
  • testname.yaml::metadata::maintainer - list of email addresses of test maintainer(s)
  • testname.yaml::metadata::os - list of Linux distributions where this test can run
  • testname.yaml::metadata::scope - list of strings: can include: 'functional', 'performance', 'preempt-rt'
    • looks like a set of tags?
  • testname.yaml::environment - list of something: lava-test-shell?
  • testname.yaml::metadata::devices - list of device types (board names) where this test can run
  • testname.yaml::params - list of arbitrary test variables
  • testname.yaml::run - items for test execution
  • testname.yaml::run::steps - shell lines to execute the test (executed on target board)
  • libhugetlbfs.sh::VERSION, WORD_SIZE - test variables
  • libhugetlbfs.sh::libhugetlbfs_build_test - shell function to build libhugetlbfs software
  • libhugetlbfs.sh::libhugetlbfs_run_test - run run_tests.py, using variables, and call parse_output
  • libhugetlbfs.sh::libhugetlbfs_setup - prepare for test (mostly mount stuff)
  • libhugetlbfs.sh::libhugetlbfs_cleanup - unmount stuff and remove stuff
  • libhugetlbfs.sh::parse_output - parse output - generate result file in a specific format (one line per testcase?)
  • libhugetlbfs.sh::install - install pre-requisite packages
  • libhugetlbfs.sh::inline -- inline code to:
    • create output directory
    • check kernel config for required settings
    • call to install packages (pre-requisites)
    • setup
    • call to build_test
    • call to run_test
    • cleanup
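
To make that structure concrete, here is a hypothetical skeleton in the style of libhugetlbfs.sh. The function names mirror the notes above; the package name, commands, and the "one line per testcase" result format are assumptions, not copied from the test-definitions repository.

OUTPUT="$(pwd)/output"
RESULT_FILE="${OUTPUT}/result.txt"

install() {
    # install pre-requisite packages (a real script would handle multiple distros)
    apt-get install -y build-essential
}

build_test() {
    # fetch and build the test program source
    make -C mytest
}

run_test() {
    # run the test and record one "name pass|fail" line per testcase
    if ./mytest/run_tests.sh > "${OUTPUT}/raw.log" 2>&1; then
        echo "mytest pass" >> "${RESULT_FILE}"
    else
        echo "mytest fail" >> "${RESULT_FILE}"
    fi
}

cleanup() {
    # undo any mounts or temporary setup done for the test
    rm -rf "${OUTPUT}/tmp"
}

# inline code: create the output directory, then run each phase in order
mkdir -p "${OUTPUT}"
install
build_test
run_test
cleanup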

Yocto Project

An "on target" test of the compiler:

(same directory has simple python/perl tests and so on)

http://git.yoctoproject.org/cgit.cgi/poky/tree/meta/lib/oeqa/files (the test files, for context; they're just hello world examples)

This is a "selftest" for the "devtool" command that is part of the overall build system; it's a bit more complex, with shared functions and tests for each of devtool's subcommands.

This has all the test code and the core test definitions. Test definitions are in "cases" directories under the "manual", "runtime", "sdk" and "selftest" directories.

There's an article about Yocto ptest testing, available at: https://lwn.net/Articles/788626/

files

  • testname.py: python module with test code

fields

  • testname.py::setup
  • testname.py::teardown
  • gcc.py::OETestID - indicates numeric testcase id
  • gcc.py::OETestDepends - decorator listing other testcases that this test depends on
  • gcc.py::OEHasPackage - decorator that skips the test unless the named package is present in the image
  • gcc.py::test_gcc_compile
  • gcc.py::test_gpp_compile
  • gcc.py::test_gpp2_compile
  • gcc.py::test_make

0-day

files

  • tests/iozone
  • jobs/iozone.yaml
  • pack/iozone
  • stats/iozone
  • pkg/iozone/PKGBUILD
  • pkg/iozone/iozone.install
  • pkg/iozone/iozone3_434.tar.sig

fields

  • pack/iozone::filename
  • pack/iozone::WEB_URL
  • pack/iozone::download - shell function to download the source
  • pack/iozone::build - shell function to build the source (for linux-AMD64)
  • pack/iozone::install - shell function to install binaries
  • pack/iozone::pack - shell function to create cpio file
  • tests/iozone - shell script to execute iozone for each mount point
  • stats/iozone - ruby script to extract data from stdin and write aggregate data to stdout
  • PKGBUILD::validpgpkeys
  • PKGBUILD::pkgname
  • PKGBUILD::pkgver
  • PKGBUILD::pkgrel
  • PKGBUILD::pkgdesc
  • PKGBUILD::arch - tuple of supported architectures
  • PKGBUILD::url
  • PKGBUILD::license
  • PKGBUILD::depends
  • PKGBUILD::optdepends
  • PKGBUILD::install
  • PKGBUILD::source - indicate source url and signature
  • PKGBUILD::sha512sums
  • PKGBUILD::build - shell function to build the package source
  • PKGBUILD::package - shell function to install files
  • jobs/iozone.yaml::suite
  • jobs/iozone.yaml::testcase
  • jobs/iozone.yaml::category
  • jobs/iozone.yaml::disk
  • jobs/iozone.yaml::fs - list of filesystems to test?
  • jobs/iozone.yaml::iosched - list of ioschedulers to test?
  • iozone.install::post_install
  • iozone.install::post_upgrade
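
Here is a hypothetical sketch of a pack script with the structure described above. The function and variable names follow the fields listed; the URL, directory names, and the $pkg_root staging directory are illustrative guesses, not copied from the lkp-tests repository.

filename=iozone3_434.tar
WEB_URL=http://www.iozone.org/src/current
pkg_root="$(pwd)/pkg_root"     # made-up staging directory for packaging

download() {
    # fetch the source tarball from the upstream site
    wget "$WEB_URL/$filename"
}

build() {
    # unpack and build the source for linux-AMD64
    tar xf "$filename"
    make -C iozone3_434/src/current linux-AMD64
}

install() {
    # copy the binaries into the staging directory
    mkdir -p "$pkg_root/usr/bin"
    cp iozone3_434/src/current/iozone "$pkg_root/usr/bin/"
}

pack() {
    # create a (gzipped) cpio archive of the staged files
    (cd "$pkg_root" && find . | cpio -o -H newc | gzip) > iozone.cgz
}

# run the phases in order (in the real system the framework drives these)
download
build
install
pack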

CKI

files

  • makefile
  • README.md
  • runtest.sh

makefile has targets: run, build, clean

fields

  • <environment>:METADATA = name of file that contains metadata
  • makefile:TESTVERSION
  • makefile:TOPLEVEL_NAMESPACE = location where this test is
  • makefile:PACKAGE_NAME = name of test
  • $METADATA:Owner = owner/maintainer of the test
    • value = name <email>
  • $METADATA:Name = test name
  • $METADATA:Path = test directory (TEST_DIR)
  • $METADATA:TestVersion = version of test
  • $METADATA:RunFor = ???
  • $METADATA:License = license of test
  • $METADATA:Description
  • $METADATA:Architectures
  • $METADATA:TestTime = looks like an expected duration
  • $METADATA:Priority = test priority (can be: "Normal")
  • $METADATA:Confidential = (can be: "no")
  • $METADATA:Destructive = (can be: "no")
  • $METADATA:Requires = (can be acpica-tools) Looks like a package requirement
  • $METADATA:RhtsRequires = ??
  • $METADATA:Type = ??
  • README.md:Test Maintainer:

libraries

  • /usr/bin/rhts-environment.sh
  • /usr/share/beakerlib/beakerlib.sh

APIs

  • environment_var:$OUTPUTFILE - place where test strings should be written
    • example: echo "foo bar" | tee -a $OUTPUTFILE
  • shell function: rhts_submit_log()
    • usage: rhts_submit_log -l /mnt/redhat/user/acpi/acpitable.log
  • shell function: report_result()
    • report_result <test_name> [FAIL|PASS] <rcode>
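
Putting these APIs together, a minimal runtest.sh might look roughly like the sketch below. Only $OUTPUTFILE, rhts_submit_log and report_result come from the notes above; the test name, the acpidump command, and the log path are made-up examples.

#!/bin/bash
. /usr/bin/rhts-environment.sh
. /usr/share/beakerlib/beakerlib.sh

TEST="example/acpi/acpitable"        # made-up test name

# run the test program, keeping a copy of its output for the harness
if acpidump > /tmp/acpitable.log 2>&1; then
    echo "acpidump succeeded" | tee -a $OUTPUTFILE
    rhts_submit_log -l /tmp/acpitable.log
    report_result $TEST PASS 0
else
    echo "acpidump failed" | tee -a $OUTPUTFILE
    rhts_submit_log -l /tmp/acpitable.log
    report_result $TEST FAIL 1
fi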

LTP

(this is currently unstructured)

New LTP metadata project: https://github.com/metan-ucw/ltp/tree/master/docparse

kselftest

Config fragments are called 'config' and are in each testing area's top-level directory.

Kees Cook just submitted a patch to introduce a file called 'settings' in the top-level directory for each testing area, which currently holds a field called 'timeout'.

files

  • config - kernel config fragment for the testing area
  • settings - per-area settings file (currently holds 'timeout')
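
For example (hypothetical contents, shown only to illustrate the form), a 'config' fragment is a list of kernel config options needed by the tests in that area:

CONFIG_USER_NS=y

and a 'settings' file holds key=value pairs, currently just the timeout (in seconds; 300 is an arbitrary value):

timeout=300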


Test Comparisons

OpenSSL

See OpenSSL test definition comparison

Sysbench

See Sysbench test definition comparison

Iperf

See Iperf test definition comparison

Test Job Requests

See Test Job Requests comparison