LTP survey response

LTP survey response provided by Cyril Hrubis

Survey Questions

  • What is the name of your test framework? LTP

Which of the aspects below of the CI loop does your test framework perform?

Most of the value of LTP is in the actual testcases.

As for the CI loop, there is a script and a binary that can run a chosen (sub)set of testcases and produce text files with results, and that's it. So about half of this questionnaire does not apply to LTP, and the other half applies only loosely; I've tried to fill it in anyway.
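
For illustration, running a subset of the testcases with the current tooling looks roughly like this (a sketch; the exact flags can differ between LTP versions):

    # Run the "syscalls" runtest file from the installed LTP tree,
    # writing a log file and the per-test output to /tmp.
    cd /opt/ltp
    ./runltp -f syscalls -l /tmp/ltp-syscalls.log -o /tmp/ltp-syscalls.out
    # Run only the tests whose id matches "madvise":
    ./runltp -s madvise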

Also, the current solution for running the tests has many flaws. For instance, the logic that executes the tests runs on the same machine as the testcases, which is a big problem when the machine dies, something that happens quite often with kernel regression tests. So I have started to experiment with a different solution that would run testcases over ssh, a serial console, etc., or spawn virtual machines for the testing.

However, I do want to have a CI loop for the test suite itself, which is one of the reasons why I'm working on a replacement for the LTP test runner. As an LTP maintainer I care about catching regressions in the tests themselves, so I would like to have a set of reference VMs that would run the latest LTP git head from time to time, compare the results against the last stable result set, and report test regressions.

Does your test framework:

source code access

  • access source code repositories for the software under test? No.
  • access source code repositories for the test software? No.
  • include the source for the test software? No.
  • provide interfaces for developers to perform code reviews? No.
  • detect that the software under test has a new version? No.
  • detect that the test software has a new version? No.

Basically the current solution in LTP behaves like any other UNIX software. You download a tarball manually, unpack it, compile and install, and then you can run the testcases.
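
A minimal sketch of that flow (the version is a placeholder; /opt/ltp is the usual default install prefix):

    # Unpack a release tarball, then configure, build and install.
    tar xf ltp-full-<version>.tar.xz
    cd ltp-full-<version>
    ./configure
    make -j$(nproc)
    make install
    # The testcases can then be run from the install directory:
    cd /opt/ltp && ./runltp -f syscalls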

The "new" solution I have started to experiment with, however, is planned to be able to install different LTP versions and even compare results between them.

test definitions

Does your test system:

  • have a test definition repository? Yes.
    • if so, what data format or language is used (e.g. yaml, json, shell script)

LTP has so-called runtest files, which are pretty simple text files where each line starts with a test id (a string up until the first whitespace) followed by a command line to execute, mostly a binary name, sometimes followed by parameters.
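
A few illustrative lines, loosely based on the syscalls and mm runtest files (the exact contents differ between LTP versions):

    # <test id>  <command line to execute>
    abort01      abort01
    madvise01    madvise01
    # the same binary can appear under several ids with different parameters:
    mtest01      mtest01 -p80
    mtest01w     mtest01 -w -p80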

Does your test definition include:

  • source code (or source code location)? No.
  • dependency information? No, the tests in LTP are self-contained, so there are no classical dependencies.

We do have some build dependencies though. Some tests need certain devel libraries installed prior to compilation; this is handled by a configure script.

Some tests do not make sense on older kernels or on kernels that do not have certain functionality compiled in; this is detected at runtime by the test itself.

  • execution instructions? Yes, I suppose the command line in the runtest files qualifies as an execution instruction.
  • command line variants? Yes, we do have a command line in the runtest files and we do pass parameters to certain tests this way.
  • environment variants? Maybe.

There are certain environment variables that can change the behavior of the LTP testcases, but these are supposed to be configured once for the whole test run and are exported by the script that runs the testcases.
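
As an example, a test run could be tuned roughly like this before the testcases are started (the exact set of variables depends on the LTP version):

    # Directory used for the per-test temporary directories:
    export TMPDIR=/var/tmp/ltp
    # Block device and filesystem for tests that need to format a device:
    export LTP_DEV=/dev/sdb1
    export LTP_DEV_FS_TYPE=ext4
    # Multiply all test timeouts, useful on slow hardware or under emulation:
    export LTP_TIMEOUT_MUL=3
    ./runltp -f syscalls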

  • setup instructions?
  • cleanup instructions? Unless something really horrible happens, each LTP testcase does its own setup and cleanup itself (which also answers the setup question above); see the sketch after this list.
    • if anything else, please describe:
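
To illustrate the self-contained setup/cleanup mentioned above, here is a minimal sketch of a shell testcase, assuming the newer tst_test.sh shell API (the exact variable and function names may differ between LTP versions):

    #!/bin/sh
    # Minimal sketch of a self-contained LTP shell test: the test declares
    # its own setup and cleanup functions and the library runs them.
    TST_SETUP=setup
    TST_CLEANUP=cleanup
    TST_TESTFUNC=do_test
    TST_NEEDS_TMPDIR=1

    setup()
    {
        # Test-specific setup runs in a private temporary directory.
        echo "hello" > testfile
    }

    do_test()
    {
        if grep -q hello testfile; then
            tst_res TPASS "testfile has the expected content"
        else
            tst_res TFAIL "testfile content mismatch"
        fi
    }

    cleanup()
    {
        # The library removes the temporary directory itself; anything
        # else that needs undoing would go here.
        :
    }

    . tst_test.sh
    tst_run   # newer LTP versions run the test when the library is sourced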

Does your test system:

  • provide a set of existing tests? Yes.
    • if so, how many? More than 3000.

build management

Does your test system:

  • build the software under test (e.g. the kernel)? A few of our testcases need additional kernel modules; these are built during the LTP compilation if the build system is told where to find the kernel build system.
  • build the test software? You have to compile and install LTP yourself.
  • build other software (such as the distro, libraries, firmware)? No.
  • support cross-compilation? Yes, via autotools.
  • require a toolchain or build system for the SUT? No.
  • require a toolchain or build system for the test software? Yes, we need a toolchain, and some testcases are not built unless devel libraries are present, for example libaio, libattr, libcap, etc. A few tests also need the kernel-devel package so that the build process produces kernel modules.
  • come with pre-built toolchains? No.
  • store the build artifacts for generated software? I do not think so.
    • in what format is the build metadata stored (e.g. json)?
    • are the build artifacts stored as raw files or in a database?
      • if a database, what database?

Test scheduling/management

Does your test system:

  • check that dependencies are met before a test is run? no
  • schedule the test for the DUT? no
    • select an appropriate individual DUT based on SUT or test attributes? no
    • reserve the DUT? no
    • release the DUT? no
  • install the software under test to the DUT? no
  • install required packages before a test is run? no
  • require particular bootloader on the DUT? (e.g. grub, uboot, etc.) no
  • deploy the test program to the DUT? no
  • prepare the test environment on the DUT? no
  • start a monitor (another process to collect data) on the DUT? no
  • start a monitor on external equipment? no
  • initiate the test on the DUT? no
  • clean up the test environment on the DUT? no

LTP does not do any of this.

DUT control

Does your test system:

  • store board configuration data? no
    • in what format?
  • store external equipment configuration data? no
    • in what format?
  • power cycle the DUT? no
  • monitor the power usage during a run? no
  • gather a kernel trace during a run? no
  • claim other hardware resources or machines (other than the DUT) for use during a test? no
  • reserve a board for interactive use (ie remove it from automated testing)? no
  • provide a web-based control interface for the lab? no
  • provide a CLI control interface for the lab? no

LTP does not do any of this.

Run artifact handling

Does your test system:

  • store run artifacts? Yes.
    • in what format? Plain text files.
  • put the run meta-data in a database? No.
    • if so, which database?
  • parse the test logs for results? No. The test result is propagated via the test process exit value (see the sketch below).
  • convert data from test logs into a unified format? No.
    • if so, what is the format?
  • evaluate pass criteria for a test? No, each test reports the status itself.
  • do you have a common set of result names: Yes.
    • if so, what are they? Passed, Failed, Skipped, Warning, Broken.

There is a subtle difference between Failed and Broken: Broken is usually reported when the test setup failed before we even attempted to check the test assertions. Warning is usually produced when the test cleanup failed to restore the system.
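
As a rough sketch of how this looks from the runner's side, a test binary can be run directly and its exit status inspected; zero means everything passed, while a non-zero status encodes the Failed/Broken/Warning/Skipped bits defined in the LTP headers:

    # Run a single testcase binary directly and look at its exit status.
    /opt/ltp/testcases/bin/madvise01
    ret=$?
    if [ "$ret" -eq 0 ]; then
        echo "madvise01: all assertions passed"
    else
        echo "madvise01: exit status $ret (failed/broken/warning/skipped bits set)"
    fi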

  • How is run data collected from the DUT? LTP just writes text files to disk, that's all.
  • How is run data collected from external equipment? Not applicable.
  • Is external equipment data parsed? Not applicable.

User interface

Does your test system:

  • have a visualization system? Not really; LTP can write a (somewhat ugly) HTML page with a table of the results.
  • show build artifacts to users? No.
  • show run artifacts to users? No.
  • do you have a common set of result colors? The tests themselves do; they color the messages printed to the terminal.
    • if so, what are they? PASSED - green, FAILED - red, BROKEN - red, SKIPPED - yellow, WARNING - magenta, INFO - blue
  • generate reports for test runs? Yes, the runltp script writes text files with lists of executed tests and failed tests.
  • notify users of test results by e-mail? I think that the runltp script is supposed to be able to send results via email, but I doubt that it's working.
  • can you query (aggregate and filter) the build meta-data? No.
  • can you query (aggregate and filter) the run meta-data? No.
  • what language or data format is used for online results presentation? n/a
  • what language or data format is used for reports? n/a
  • does your test system have a CLI control tool? no?
    • what is it called? Not applicable.

Languages:

Examples: json, python, yaml, C, javascript, etc.

  • what is the base language of your test framework core? C and (POSIX) shell. The experimental new LTP test execution framework is written in Perl.
  • What languages or data formats is the user required to learn? None I can think of.

Can a user do the following with your test framework:

  • manually request that a test be executed (independent of a CI trigger)? no
  • see the results of recent tests? no
  • set the pass criteria for a test? no
    • set the threshold value for a benchmark test? no
    • set the list of testcase results to ignore? no
  • provide a rating for a test? (e.g. give it 4 stars out of 5) no
  • customize a test? no
    • alter the command line for the test program? yes, by editing the runtest files
    • alter the environment of the test program? no
    • specify to skip a testcase? not directly, only by creating a new scenario file
    • set a new expected value for a test? no
    • edit the test program source? yes, by manually editing the LTP source
  • customize the notification criteria? no
    • customize the notification mechanism (eg. e-mail, text) no
  • generate a custom report for a set of runs? no
  • save the report parameters to generate the same report in the future? no

Requirements

Does your test framework:

  • require minimum software on the DUT? yes
    • if so, what?

We need at least a portable shell interpreter; beyond that, certain tools are needed in order to enable some specific testcases.

Quite a few testcases need mkfs, at least for ext2, for formatting loopback devices, and some may need additional utilities, e.g. the quota tests need quotactl. Usually when these utilities are not installed the test will report the Skipped status.

And of course we need the libraries we compiled LTP against, e.g. libaio, libcap, ...
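
Purely as an illustration of that behaviour (this is not the actual LTP library code), a test needing mkfs.ext2 checks for it at runtime and bails out with the Skipped result instead of failing:

    # Generic illustration: skip when an optional tool is missing.
    if ! command -v mkfs.ext2 >/dev/null 2>&1; then
        echo "mkfs.ext2 not installed, skipping test"
        exit 32   # assumption: the exit value LTP uses for Skipped/TCONF
    fi
    # ...the real test would format its loop device and continue here...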

  • require minimum hardware on the DUT? There are a few tests that will report the Skipped status if there is not enough memory on the SUT, and a few tests that will probably break horribly on systems with a small amount of memory, which should be fixed.
  • require agent software on the DUT? No.
    • If so, what agent?
  • is there optional agent software or libraries for the DUT? No.
  • require external hardware in your labs? Not applicable.

APIS

Does your test framework:

  • use existing APIs or data formats to interact within itself, or with 3rd-party modules? At this point no.

The experimental new LTP testrunner can output test results in json.

  • have a published API for any of its sub-module interactions? no
    • Please provide a link or links to the APIs?
  • What is the nature of the APIs you currently use? n/a


Relationship to other software:

  • what major components does your test framework use (e.g. Jenkins, MongoDB, Squad, Lava, etc.)? None.
  • does your test framework interoperate with other test frameworks or software?

Well, LTP has been integrated into a fair number of test frameworks; unfortunately I do not know much about most of them.

I can maybe speak a bit about openQA, which is a CI system we use at SUSE for testing distribution installation images. From one point of view it's very different from testing actual boards, since 90% of the tests run in VMs and its main purpose was to test that the ISO installs fine, so command line tests were added as an afterthought. However, some of the more abstract patterns of a CI loop would likely match what you guys are doing.

Overview

Please list your major components here:

The current LTP solution

  • runltp - a script that pre-processes runtest files and calls ltp-pan
  • ltp-pan - a binary that executes tests one after another and writes test results

The experimental work-in-progress solution

  • runltp-ng - a perl script that has several plugins:
    • backend.pm - a library that can run commands on a remote machine and manage it (reboot, poweroff); currently it can spawn virtual machines, or use a serial console or an ssh connection
    • utils.pm - a library that can collect SUT information (memory, CPUs, distribution, etc.), install LTP on the SUT, and execute testcases
    • results.pm - a library that can write down (serialize) test results from Perl data structures into JSON or an HTML page

What is missing at this point is something that can analyze the actual results, so that we can, for example, compare different testruns to look for regressions, etc.

Additional Data