Test Dependencies

From eLinux.org
Revision as of 12:50, 22 August 2019 by Tim Bird (talk | contribs) (new docparse system)

This page is for organizing information about test dependencies, which are part of a test's Test Definition.

Action Items

  • research kselftest dependency expressions
    • does kselftest only support config fragments?
    • how does its internal testrunner use this?
  • record information about the LTP docparse system
  • research and document LKFT test dependencies


A test dependency is an expression of a requirement or constraint for a test.

A test dependency can serve multiple purposes:

  • to filter the list of tests that apply to a target
  • to allow a test scheduler to select or locate an appropriate target for a test
    • or, to prevent a test from being associated with (or run on) a particular target or set of targets
  • to indicate that some action needs to occur before the test is run (part of test setup)
    • to indicate that some resources need to be claimed before test start and released on test exit
    • to indicate tests or steps that must be performed before this test
  • to document what the test is related to (the services it needs, or the bug it checks for)


Examples of test dependencies include:

  • needs root permissions in order to run
  • needs a particular kconfig option setting
  • needs a package installed on the target
  • needs a program on the target
  • needs a library in the SDK for the target (for the build phase)
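As a sketch of how a test runner might evaluate simple declarative dependencies like these, here is a minimal plain-shell check. The NEED_* variable names and the check_deps function are illustrative only, not from any particular framework:

```shell
# Hypothetical declarative dependencies for a test
NEED_ROOT=0           # set to 1 if the test must run as root
NEED_PROGRAM=sh       # a program that must exist on the target
NEED_KCONFIG=CONFIG_PROC_FS   # a kernel config option that must be set

check_deps() {
    if [ "$NEED_ROOT" = 1 ] && [ "$(id -u)" -ne 0 ]; then
        echo "SKIP: test needs root"
        return 1
    fi
    if [ -n "$NEED_PROGRAM" ] && ! command -v "$NEED_PROGRAM" >/dev/null 2>&1; then
        echo "SKIP: missing program $NEED_PROGRAM"
        return 1
    fi
    # kconfig can only be checked when the target exposes its config
    if [ -n "$NEED_KCONFIG" ] && [ -r /proc/config.gz ]; then
        if ! zcat /proc/config.gz | grep -q "^${NEED_KCONFIG}="; then
            echo "SKIP: kernel lacks $NEED_KCONFIG"
            return 1
        fi
    fi
    echo "deps OK"
}
check_deps
```

A real test runner would read such declarations out of the test definition and decide whether to run, skip, or re-target the test.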

From the LTP docparse README.md document:

  • Test needs at least 1GB of RAM
  • Test needs a block device at least 512MB in size
  • Test needs a NUMA machine with two memory nodes and at least 300 free pages on each node
  • Test needs an i2c eeprom connected on an i2c bus
  • Test needs two serial ports connected via a null-modem cable
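Hardware requirements like the first one can be checked at runtime on the target. A minimal sketch of the "at least 1GB of RAM" check, using /proc/meminfo (the variable names here are illustrative, not LTP API):

```shell
# Sketch: check the "Test needs at least 1GB of RAM" requirement
need_kb=$((1024 * 1024))   # 1GB expressed in kB
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
if [ "$mem_kb" -ge "$need_kb" ]; then
    echo "RAM requirement met ($mem_kb kB available)"
else
    echo "SKIP: need $need_kb kB of RAM, have $mem_kb kB"
fi
```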

Random notes

  • to process these, the testrunner code is going to need to associate the dependencies with the test (name), which may force us to solve the 'tguid' problem.
  • this is most likely part of:
    • interface C (between test definition repository and test manager), or
    • interface E (between test manager and test scheduler)

Expressions in different systems

Fuego
See [1]

Some Fuego dependency statements are declarative, and some are imperative.

Fuego has a test_pre_check phase, in which dependencies are checked, and in which arbitrary code can be executed to check that required pre-requisites for the test are met.

Fuego stores dependencies in the fuego_test.sh file.

Declarative dependencies can be placed anywhere in the file. Imperative dependencies (pre_check code) are placed in the function test_pre_check() in fuego_test.sh.


Example Fuego dependency statements:

  • NEED_PROGRAM=expect
  • assert_has_program expect
  • assert_define "LINK_NETMASK"
  • is_on_target liblwip.so LIB_LWIP /lib:/usr/lib:/usr/local/lib
  • is_on_target_path time PROGRAM_TIME
  • is_on_sdk libjpeg.so LIBJPEG /lib:/usr/lib/:/usr/local/lib:/usr/lib/$ARCH-linux-*/:/usr/lib/$TOOLCHAIN-linux-*/
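To show how the imperative style fits together, here is a plain-shell approximation of a Fuego test_pre_check. Note that assert_has_program and assert_define are real Fuego helpers provided by its core; they are re-implemented here only so the sketch is self-contained, and the LINK_NETMASK value is a stand-in for what a Fuego board file would provide:

```shell
# Simplified stand-ins for Fuego's built-in helpers (illustrative only)
assert_has_program() {
    command -v "$1" >/dev/null 2>&1 || { echo "missing program: $1"; exit 1; }
}
assert_define() {
    # check that the named variable is defined and non-empty
    eval "val=\${$1}"
    [ -n "$val" ] || { echo "variable $1 is not defined"; exit 1; }
}

LINK_NETMASK=255.255.255.0   # would come from the board file in real Fuego

test_pre_check() {
    assert_has_program sh
    assert_define LINK_NETMASK
    echo "pre_check passed"
}
test_pre_check
```

In Fuego, a failed assertion in test_pre_check aborts the test before the build and run phases.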


Yes, Fuego has both declarative (NEED_*) and imperative (is_on_target) dependency statements.


LTP

Dependencies are declared as fields in a test program's tst_test structure.


  • needs_root = 1
  • needs_tmpdir = 1
  • needs_checkpoints = 1

new docparse system

LTP developers are working on a new system that adds more dependency data to the C files and parses it out into a JSON file, for communication with external test runners.

See https://github.com/metan-ucw/ltp/tree/master/docparse

Some notes:

  • the test runs the dependency check itself at test runtime
    • so the dependency information needs to be accessible to the test C code
  • plan to add handling of different dependencies based on test variants at a later time

From a conversation between Daniel and Cyril:

> [Comment] Sometimes, you may need to specify requirements
> conditionally. For example, if you pass parameter "-s", then you need
> "root" permissions. The same can happen with test variants.

I'm aware of this; the problem here is that to make things flexible enough, we would need to be able to express quite complex structures.

So in the end we would probably need to embed JSON, TOML, or a similar data format, or maybe even a domain-specific language, to express the dependencies, and doing that in a way that will not suck would be hard.

One problem here is that test requirements have to be available to the test at runtime, which means that the test library will have to be able to parse the test requirements as well. That means these have to be accessible somewhere from the C structures.

To start with, I would like to pass the requirements we already have to the testrunner; once that is working, we can proceed to moving test variants into the testcases and add per-variant requirements as well.

Test writers can add additional metadata by writing comments in a specific format layout. The format layout is an open question. At this point it's just plain text with sections that are started with a string enclosed in brackets. There is nothing that parses the text yet, so effectively there is no format.

What I have in mind is that there will be different sections encoded in different formats, and each section would have its own handler that would parse the data. The test description would be in some kind of text markup, the test variants may be in a JSON array, etc. But at this point nothing is decided yet; as I said, it's hard to come up with anything unless we have consumers for the data.

What is supported now (Aug 2019):

  • upstream commit ids that need to be in the kernel for the test to pass

(in particular, this could be useful for CVE tests). There is a simple HTML table built from the JSON file, see:


What is planned:

  • a string for each variant that explains what that variant is testing.

I was thinking of [putting everything in the data structure, but] I do not like having the test description (i.e. several paragraphs of markup text) stored in C strings. It would be nice to have this printed on the -h switch, but it's not strictly required.

I even tried a version with a pre-processing step where I parsed all the information from a comment and built a header with C structures that would be included when a test is compiled, but that overly complicated the build.

So in the end I settled on a middle ground, which is having requirements encoded in C structures and documentation stored in comments. I'm not 100% decided on the current split, though.

[The JSON file is currently] 52KB in size. I would expect it to grow a bit when we add more documentation to tests etc., but it would still be in the single digits of megabytes, which I do consider small enough.

And given that I want to get rid of runtest files, the idea is to build the database of all tests during the LTP build and install the file along with LTP. Then the testrunner will make subsets of tests based on that file and on some information supplied by the user, e.g. tag, wildcard, etc. So the plan is that the testrunner will always get the full JSON.
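The notes above suggest the metadata file maps test names to their requirement fields. A hypothetical sketch of its shape (the field names, test names, and commit-id placeholder are illustrative, not the actual docparse schema):

```json
{
  "tests": {
    "some_cve_test": {
      "needs_root": true,
      "linux-git": ["commit-id-here"]
    },
    "some_mem_test": {
      "needs_tmpdir": true
    }
  }
}
```

A testrunner would filter this map by tag or wildcard to build the subset of tests to schedule.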



0-day (LKP)

Examples of dependency keys found in LKP job files:

  • need_modules: true
  • need_memory: 8G
  • need_cpu: 4
  • need_x: true
  • need_kconfig: CONFIG_MD_RAID456
  • need_kconfig: CONFIG_FTRACE=y
  • need_kconfig:
  • avoid_nfs: 1
  • need_kconfig (list of strings)
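Taken together, a hypothetical LKP job fragment combining the keys above might look like the following (purely illustrative, assembled from the list, not copied from an actual job file):

```yaml
# hypothetical fragment of an LKP job file; keys from the list above
need_modules: true
need_memory: 8G
need_cpu: 4
need_kconfig:
  - CONFIG_MD_RAID456
  - CONFIG_FTRACE=y
```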


There seems to be some kind of calculated need_kconfig. This is from lkp-test/include/fs1/OTHERS:

   <%=
   case ___
   when /cifs|cramfs|logfs|squashfs/
       "need_kconfig: CONFIG_#{___.upcase}"
       "need_kconfig: CONFIG_#{___.upcase}_FS"