Yocto project survey response


Yocto Project Testing survey response

Yocto project survey response provided by Richard Purdie and Tim Orling

Overview of Yocto Project Testing Structure/Terminology

Tests are performed on several levels:

  • oe-selftest tests the inner workings of the Yocto Project/OpenEmbedded build environment
  • bitbake-selftest tests the inner workings of the BitBake environment
  • Build time testing (are generated binaries the right architecture?)
  • imagetest or oeqa/runtime tests that images boot and have working functionality (if Python is present, does it work? does any toolchain build applications?). Targets can be virtual or real hardware, either a static target or under the control of e.g. LAVA (a minimal sketch follows this list)
  • oeqa/sdktest and oeqa/esdktest test SDK and eSDK toolchains (do the compilers work?)
  • ptest (package test) enables test suites from individual pieces of software to run
  • Build time performance testing
  • Yocto Project autobuilder - plugin for buildbot which runs our test matrix (build and runtime, all of the above)
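
As a concrete illustration of the oeqa/runtime level, the sketch below shows roughly what a minimal image test looks like. It assumes the OERuntimeTestCase base class and target API found under meta/lib/oeqa in OE-Core; the test itself is made up.

  # Minimal sketch of an oeqa/runtime image test (hypothetical test; base class
  # and target API as found under meta/lib/oeqa).
  from oeqa.runtime.case import OERuntimeTestCase

  class ExampleEchoTest(OERuntimeTestCase):
      def test_echo_on_target(self):
          # Commands are run on the image under test (QEMU or real hardware) over ssh.
          status, output = self.target.run('echo hello')
          self.assertEqual(status, 0, msg='echo failed: %s' % output)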

Survey Questions

  • What is the name of your test framework? Yocto Project or OEQA

Which of the aspects of the CI loop below does your test framework perform? All except “Code Review” and “Lab/Board Farm”

Does your test framework:

source code access

  • access source code repositories for the software under test? Yes (bitbake fetcher)
  • access source code repositories for the test software? Yes
  • include the source for the test software? Yes (although ptest source is often brought in from upstream)
  • provide interfaces for developers to perform code reviews? No, this is done via patches sent to mailing lists
  • detect that the software under test has a new version? Partially. The Auto Upgrade Helper (AUH) is available to test for new upstream versions. The build can also be configured to pull the latest source, e.g. from a git repo (AUTOREV).
    • if so, how? Checks an upstream URL for a regex pattern (AUH) or scans source control for latest code at build time (AUTOREV).
  • detect that the test software has a new version? Partially. Test software comes from the upstream metadata repos just like other source, or from the software itself (ptest)

test definitions

Does your test system:

  • have a test definition repository? Yes
    • if so, what data format or language is used?

Mostly written using Python unittest. ptest varies by the package under test; it is often a shell script, with an interface script to convert the output to our standardised ptest output format.
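
The standardised ptest output is one result per line, prefixed with PASS:, FAIL: or SKIP: followed by the test name. The sketch below is a purely hypothetical interface script (real run-ptest interfaces are usually shell) showing how a package's own test output might be converted into that format; the test command and the upstream output format are made up.

  #!/usr/bin/env python3
  # Hypothetical ptest interface script: run a package's own test suite and
  # re-emit each result in the standardised ptest format ("PASS: <name>" etc.).
  import subprocess

  # Package-specific test command (made up for this sketch).
  proc = subprocess.run(['./run-upstream-testsuite'], capture_output=True, text=True)
  for line in proc.stdout.splitlines():
      # Assume the upstream suite prints "<testname>: ok" or "<testname>: fail".
      name, _, verdict = line.rpartition(':')
      print('%s: %s' % ('PASS' if verdict.strip() == 'ok' else 'FAIL', name.strip()))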

Does your test definition include:

  • source code (or source code location)? In the original build metadata
  • dependency information? Yes. ptest packages are normal rpm/deb/ipk packages with dependencies.
  • execution instructions? Yes
  • command line variants? Yes (in the execution instructions)
  • environment variants? Yes (in the execution instructions)
  • setup instructions? Yes (in the execution instructions or as unittest setup methods)
  • cleanup instructions? Yes (in the execution instructions or as unittest cleanup methods; see the sketch after this list)
    • if anything else, please describe:
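
Since the test definitions are largely standard Python unittest, setup/cleanup instructions and environment variants mostly follow stock unittest conventions. The generic sketch below illustrates the idea; the option name and values are made up and not part of any OEQA API.

  # Generic illustration of setup/cleanup and an environment variant expressed
  # with stock unittest methods (names and values are made up).
  import os
  import unittest

  class ExampleVariantTest(unittest.TestCase):
      def setUp(self):
          # Setup instructions: select the environment variant for this run.
          self._saved = os.environ.get('EXAMPLE_OPTION')
          os.environ['EXAMPLE_OPTION'] = '1'

      def tearDown(self):
          # Cleanup instructions: restore the original environment.
          if self._saved is None:
              os.environ.pop('EXAMPLE_OPTION', None)
          else:
              os.environ['EXAMPLE_OPTION'] = self._saved

      def test_variant_is_active(self):
          self.assertEqual(os.environ['EXAMPLE_OPTION'], '1')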

Does your test system:

  • provide a set of existing tests? Yes
    • if so, how many? hundreds - see below
  • bitbake-selftest - 353 testcases
  • oe-selftest - 332 testcases
  • Imagetest - 49 testcases
  • SDK tests - 9 testcases
  • eSDK tests - 16 testcases
  • ptest - 64 recipes have ptest packages in OE-Core (Note one ptest may encapsulate all of LTP)

build management

Does your test system:

  • build the software under test? It can, but build and test are two separate phases, each independent of the other.
  • build the test software? Yes, when appropriate
  • oe-selftest and oeqa are mostly Python (interpreted), so N/A
  • ptest includes building the test runner script and tests into packages, which are then potentially included in an image for testing
  • build other software (such as the distro, libraries, firmware)? Yes. Yocto Project is a complete build system/environment.
  • support cross-compilation? Yes
  • require a toolchain or build system for the SUT? No.
  • require a toolchain or build system for the test software? No.
  • come with pre-built toolchains? It can be configured to use prebuilt toolchains or prebuilt objects from sstate.
  • store the build artifacts for generated software? Yes, as packages, images or as “sstate” (the Yocto Project's shared state format of pre-built objects)
    • in what format is the build metadata stored (e.g. json)? bitbake recipes (.bb files)
    • are the build artifacts stored as raw files or in a database? Either raw files, packages or sstate (tarball of files)
      • if a database, what database?

Test scheduling/management

OEQA will either bring up a virtual QEMU machine for testing (in which case it handles everything), assume that it is free to use a machine at a given IP address (with custom hooks for provisioning/control), or rely on a third party system (e.g. LAVA) for provisioning/control.
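
Once a target is available, tests interact with it through OEQA's target object, which is also how test programs are deployed, run and cleaned up (see the list below). A rough sketch, assuming the copyTo/run/copyFrom methods of the ssh-based target class under meta/lib/oeqa; the file names are made up.

  # Rough sketch of driving the device under test through OEQA's target object
  # (file names are made up; methods as in the ssh-based target class).
  from oeqa.runtime.case import OERuntimeTestCase

  class ExampleDeployTest(OERuntimeTestCase):
      def test_deploy_and_run(self):
          # Deploy the test program to the DUT.
          self.target.copyTo('/build/host/path/testprog', '/tmp/testprog')
          # Initiate the test on the DUT; output is captured over ssh.
          status, output = self.target.run('/tmp/testprog --selfcheck')
          self.assertEqual(status, 0, msg=output)
          # Collect a log file off the device (scp under the hood).
          self.target.copyFrom('/tmp/testprog.log', 'testprog.log')
          # Clean up the test environment on the DUT.
          self.target.run('rm -f /tmp/testprog /tmp/testprog.log')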

Does your test system:

  • check that dependencies are met before a test is run? Yes
  • schedule the test for the DUT? Only through a third party system
    • select an appropriate individual DUT based on SUT or test attributes? Can select mechanism based on target machine type (QEMU or real board)
    • reserve the DUT? Only through a third party system
    • release the DUT? Only through a third party system
  • install the software under test to the DUT? Only through a third party system
  • install required packages before a test is run? Yes
  • require particular bootloader on the DUT? (e.g. grub, uboot, etc.) Only through a third party system
  • deploy the test program to the DUT? Yes
  • prepare the test environment on the DUT? Yes
  • start a monitor (another process to collect data) on the DUT? It could
  • start a monitor on external equipment? Only through a third party system
  • initiate the test on the DUT? Yes
  • clean up the test environment on the DUT? Yes

DUT control

Handled through any third party system

Run artifact handling

Does your test system:

  • store run artifacts? Yes
    • in what format? Text log files
  • put the run meta-data in a database? No
    • if so, which database?
  • parse the test logs for results? Yes
  • convert data from test logs into a unified format?
    • if so, what is the format?

Aiming for json files. Currently test results are logged into Testopia, but it is being replaced by a simpler mechanism using a git repository (a hypothetical illustration of the json shape follows the list below).

  • evaluate pass criteria for a test (e.g. ignored results, counts or thresholds)? Yes
  • do you have a common set of result names: (e.g. pass, fail, skip, etc.) Yes
    • if so, what are they? Pass, Fail, Skip and Error (error means the testcase broke somehow)
  • How is run data collected from the DUT? Tests are run via ssh and the output logged, or log files transferred off the device using scp.
  • How is run data collected from external equipment? N/A
  • Is external equipment data parsed? N/A
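
As a purely hypothetical illustration of the json result files mentioned above (the real schema is whatever the git-based mechanism settles on), a result set might simply map testcase names to the common result names; the testcase names here are made up.

  # Purely hypothetical shape for a json results file; only the result names
  # (Pass, Fail, Skip, Error) come from the survey answers above.
  import json

  results = {
      "example.ExampleEchoTest.test_echo_on_target": {"result": "Pass"},
      "ptest.examplepkg.tst-case-1": {"result": "Fail", "log": "ptest-examplepkg.log"},
  }
  print(json.dumps(results, indent=2))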

User interface

Does your test system:

  • have a visualization system?

Buildbot provides our high-level build/test status (https://autobuilder.yoctoproject.org/typhoon/#/console). We also have graphical HTML emails of our build performance tests.

  • show build artifacts to users? Yes
  • show run artifacts to users? Some of them
  • do you have a common set of result colors?
    • if so, what are they?
  • Green - All ok
  • Orange - Ok, but there were warnings
  • Red - There was some kind of failure/error
  • Yellow - In progress
  • generate reports for test runs? Yes
  • notify users of test results by e-mail? It can.
  • can you query (aggregate and filter) the build meta-data? No
  • can you query (aggregate and filter) the run meta-data? No, but you can query failures (http://errors.yoctoproject.org - our own error database system)
  • what language or data format is used for online results presentation? HTML
  • what language or data format is used for reports? Aiming for HTML emails and json
  • does your test system have a CLI control tool? Yes
    • what is it called? bitbake, oe-test, oe-selftest, bitbake-selftest

Languages:

  • what is the base language of your test framework core? Python
  • What languages or data formats is the user required to learn? Python, json

Can a user do the following with your test framework:

  • manually request that a test be executed (independent of a CI trigger)? Yes
  • see the results of recent tests? Yes
  • set the pass criteria for a test? No
    • set the threshold value for a benchmark test? No
    • set the list of testcase results to ignore? No
  • provide a rating for a test? (e.g. give it 4 stars out of 5) No
  • customize a test? Yes
    • alter the command line for the test program? Yes
    • alter the environment of the test program? Yes
    • specify to skip a testcase? Yes
    • set a new expected value for a test? Yes
    • edit the test program source? Yes
  • customize the notification criteria? Yes
    • customize the notification mechanism (eg. e-mail, text) Yes
  • generate a custom report for a set of runs? No
  • save the report parameters to generate the same report in the future?

Planned through the json output files

Requirements

Does your test framework:

  • require minimum software on the DUT? Not strictly; see below
    • if so, what? Usually ssh

Entirely test case dependent. A basic Linux system is assumed for many tests (busybox shell and C library), but the system has tested RTOS images over a serial connection before.

  • require minimum hardware on the DUT? No
    • If so, what? Network + ssh for many tests, but some only need serial console access; this is really defined by the testcases and the way the hardware is connected (e.g. LAVA).
  • require agent software on the DUT? no
    • If so, what agent? No agent required
  • is there optional agent software or libraries for the DUT? The Eclipse plugin uses tcf-agent for development
  • require external hardware in your labs? Dependent on the hardware interface used (e.g. LAVA)

APIS

Does your test framework:

  • use existing APIs or data formats to interact within itself, or with 3rd-party modules?
  • Yes, Python unittest with extensions for internal and external use
  • Yocto Project defined ptest results format.
  • have a published API for any of its sub-module interactions (any of the lines in the diagram)?
    • Please provide a link or links to the APIs?

https://wiki.yoctoproject.org/wiki/Ptest - See “What constitutes a ptest?” for standardised output definition

Sorry - this is kind of open-ended...

  • What is the nature of the APIs you currently use?

Python modules based around unittest, along with standardised formats for output/logs (e.g. ptest output).
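
For instance, host-side tests extend the same unittest-based API through OEQA's own base classes and helpers. A minimal sketch, assuming the OESelftestTestCase base class and the bitbake() helper found under meta/lib/oeqa; the recipe name is only an example.

  # Minimal sketch of an oe-selftest style test built on the unittest-based
  # OEQA API (base class and helper as found under meta/lib/oeqa).
  from oeqa.selftest.case import OESelftestTestCase
  from oeqa.utils.commands import bitbake

  class ExampleSelftest(OESelftestTestCase):
      def test_build_example_recipe(self):
          # bitbake() invokes the build system on the host and returns its status/output.
          result = bitbake('m4-native')
          self.assertEqual(result.status, 0, msg=result.output)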

Relationship to other software:

  • what major components does your test framework use? Buildbot
  • does your test framework interoperate with other test frameworks or software?
    • which ones? Could use any other framework to control the hardware (e.g. LAVA)

Overview

Please list your major components here:

An overview of the testing that happens within the Yocto Project follows.

Our testing is orchestrated by a custom plugin to Buildbot:

yocto-autobuilder2 - http://git.yoctoproject.org/cgit.cgi/yocto-autobuilder2

This loads the test matrix configuration and some helper scripts from yocto-autobuilder-helper:

yocto-autobuilder-helper - http://git.yoctoproject.org/cgit.cgi/yocto-autobuilder-helper

which stores the test matrix configuration in a json file.

The web console interface can be seen at https://autobuilder.yoctoproject.org/typhoon/#/console

There are 35 different ‘targets’ such as nightly-arm, eclipse-plugin-neon, oe-selftest. These:

  • Build images, then run the oeqa ‘runtime’ image tests under qemu (including any ptests installed in the image)
  • Build SDK/eSDKs and then run the corresponding tests
  • Trigger bitbake-selftest and oe-selftest to execute
  • Build the eclipse plugins
  • Cover many different architecture and configuration settings (init systems, kernel version, C library etc.)

Builds can be marked as release builds; if they are, artefacts are published on a webserver and an email is sent to interested parties who can perform further QA. This may be done with a further buildbot instance which interfaces to real hardware through a LAVA plugin (Intel does this). There are some tests we haven't automated yet which are run manually by QA; we recently agreed to document these in a custom json format in tree alongside our other tests. All the tests can be seen at: http://git.yoctoproject.org/cgit.cgi/poky/tree/meta/lib/oeqa (see runtime/cases or manual)

In parallel, we have dedicated machines which perform build-time performance analysis and email the results to a mailing list: https://lists.yoctoproject.org/pipermail/yocto-perf/