Test Result Codes


A result code is one of an enumerated set of valid result values. The result or "status" of a test indicates the test outcome.

This status is usually restricted to one of a set of possible values. Because of the different kinds of problems that can arise during testing, the result codes consist of more than just PASS and FAIL, which indicate success or failure of whatever the test is checking for.
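
As a sketch of the general idea (in Python; the names here are illustrative, drawn from the survey below rather than from any one framework):

  from enum import Enum

  # A result code is one value drawn from a fixed enumeration of
  # possible test outcomes (illustrative names; see the survey below).
  class Result(Enum):
      PASS = "pass"    # the test's assertions held
      FAIL = "fail"    # an assertion failed
      ERROR = "error"  # the test program itself did not run correctly
      SKIP = "skip"    # the test was not executed, e.g. missing prerequisite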


Notes

xfail

  • xfail is something gcc's test suite does (besides pytest).
    • Tim's comment: putting xfail in the test itself works for the developers of the test, but not for end users running the test. How would the test know which failures an end user wants to ignore for the moment?
  • pytest documentation for xfail: https://docs.pytest.org/en/latest/skipping.html
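
As a minimal sketch of how xfail looks in pytest (the reason string is hypothetical; the assertion genuinely fails due to binary floating-point rounding):

  import pytest

  # The test still runs, but its failure is reported as "xfailed"
  # (expected failure) rather than "failed"; an unexpected pass is
  # reported as "xpassed".
  @pytest.mark.xfail(reason="known rounding behavior, tracked elsewhere")
  def test_rounding():
      assert round(2.675, 2) == 2.68  # binary floats round this to 2.67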

survey

This section documents the most common result codes, in an effort to harmonize their usage across the industry.

Where test systems have a visualization system, and standardized colors for the different result codes, those colors are noted.

LTP

  • TPASS Passed - (color: green) the test was successful
  • TFAIL Failed - (color: red) the test assertions failed
  • TSKIP Skipped - (color: yellow) the test was skipped because of a missing prerequisite or configuration
  • TWARN Warning - (color: magenta) the test produced warnings, usually because test cleanup failed to restore the system
  • TBROK Broken - usually reported when test setup fails before the test even attempts to check the test assertions
  • TINFO Information - (color: blue) an informational message, not a test verdict
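
LTP prints these codes as tags on its output lines. As a hedged illustration (not an official LTP tool), a short Python filter can tally LTP-style output piped into it:

  import sys
  from collections import Counter

  # Tally the LTP result tags listed above as they appear in stdin.
  CODES = {"TPASS", "TFAIL", "TSKIP", "TWARN", "TBROK", "TINFO"}
  counts = Counter(word for line in sys.stdin
                   for word in line.split() if word in CODES)
  print(dict(counts))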

Fuego

See http://fuegotest.org/wiki/run.json, the 'status' field:

  • PASS - (color: green) a testcase, test set or test suite completed successfully
  • FAIL - (color: light red) a testcase, test set or test suite was unsuccessful
  • ERROR - (color: dark red) a test did not execute properly (e.g. the test program did not run correctly)
  • SKIP - (color: yellow) a test was not executed, usually due to an invalid configuration (missing prerequisite)
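
As a hedged sketch of a consumer of this field (the local file name is hypothetical; only the 'status' values above come from the Fuego documentation):

  import json

  # Read a Fuego run result file and inspect its 'status' field.
  with open("run.json") as f:
      run = json.load(f)

  status = run["status"]
  if status in ("FAIL", "ERROR"):
      print("run needs attention:", status)
  elif status == "SKIP":
      print("run was skipped")
  else:
      print("run passed")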

Jenkins

  • Stable - color: blue; everything passed
    • many people install the 'greenballs' plugin for this to be green
  • Unstable - color: yellow; tests were executed successfully, but some failures were found.
  • Failed - color: red; a problem with compilation, configuration, or a runtime error.
  • Aborted - color: grey; the build timed out, or someone intentionally stopped the run in the middle.
  • Not executed yet - color: grey; the test has not been executed yet.

pytest

pytest uses exit codes with specific meanings:

  • Exit code 0: All tests were collected and passed successfully
  • Exit code 1: Tests were collected and run but some of the tests failed
  • Exit code 2: Test execution was interrupted by the user
  • Exit code 3: An internal error occurred while executing tests
  • Exit code 4: pytest command line usage error
  • Exit code 5: No tests were collected
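
Recent pytest releases (5.0 and later) also expose these values as the pytest.ExitCode enum, so callers can compare against names rather than raw integers. A minimal sketch (the "tests/" path is hypothetical):

  import pytest

  # pytest.main() returns the same exit code the command line would.
  ret = pytest.main(["tests/"])
  if ret == pytest.ExitCode.OK:
      print("all collected tests passed")
  elif ret == pytest.ExitCode.NO_TESTS_COLLECTED:
      print("no tests were collected")
  else:
      print("pytest exited with", ret)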

pytest also has result codes. The corresponding letter is passed to the -r command-line option to filter which items are included in the short test summary report:

  • failed (f)
  • error (E)
  • skipped (s)
  • xfailed (x)
  • xpassed (X)
  • passed (p)
  • passed with output (P) - I think this just controls output for the summary report.
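
For example, passing a string of these letters to the -r option selects which outcomes appear in the short summary; a sketch using the programmatic entry point (the "tests/" path is again hypothetical):

  import pytest

  # -rfEsxX: list failed, errors, skipped, xfailed and xpassed items
  # in the short test summary report.
  pytest.main(["-rfEsxX", "tests/"])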

buildbot

  • SUCCESS: Value: 0; (color: green) a successful run.
  • WARNINGS: Value: 1; (color: orange) a successful run, with some warnings.
  • FAILURE: Value: 2; (color: red) a failed run, due to problems in the build itself, as opposed to a Buildbot misconfiguration or bug.
  • SKIPPED: Value: 3; (color: white) a run that was skipped – usually a step skipped by doStepIf (see Common Parameters)
  • EXCEPTION: Value: 4; (color: purple) a run that failed due to a problem in Buildbot itself.
  • RETRY: Value: 5; (color: purple) a run that should be retried, usually due to a worker disconnection.
  • CANCELLED: Value: 6; (color: pink) a run that was cancelled by the user.

See http://docs.buildbot.net/latest/developer/results.html
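
These values are importable constants in buildbot's codebase; the sketch below assumes they live in buildbot.process.results, per the developer documentation linked above:

  from buildbot.process.results import (
      SUCCESS, WARNINGS, FAILURE, SKIPPED, EXCEPTION, RETRY, CANCELLED)

  # The constants are the small integers listed above.
  assert SUCCESS == 0
  assert FAILURE == 2
  assert CANCELLED == 6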

kernelCI

Boot tests: the test job is either Complete or Incomplete. (LAVA produces some internal test results, but these are generic and not related to the objective of the test job.)

Functional tests:

  • pass - (color: green)
  • fail - (color: red)
  • skip - (color: yellow)
  • unknown

LAVA

pass, fail, skip, unknown

Additionally, measurements and units can be recorded for any test result. URLs can be recorded as separate test result data.

Test Jobs also provide data on Complete or Incomplete. Incomplete test jobs record the error type and the error message.
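
Inside a running LAVA test job, a test typically records results, optionally with a measurement and units, via the lava-test-case helper. A hedged sketch of calling it from a Python test script (the test-case name and values are illustrative):

  import subprocess

  # Record a "pass" for boot-time, attaching a measurement and units.
  subprocess.run(
      ["lava-test-case", "boot-time",
       "--result", "pass",
       "--measurement", "4.2", "--units", "seconds"],
      check=True,
  )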

ktest

pass, fail

labgrid

pass, fail, error, skip, xfail

(provided by pytest)

KFT

  • pass - color: green
  • fail - color: red
  • skip - color: yellow
  • xfail - color: blue, used for a failed test where the failure should be ignored
  • total - color: grey

opentest

  • pass - (color: green)
  • fail - (color: red)
  • skip - (color: yellow)
  • not-run - (color: gray)
  • block - (color: blue) - test was blocked

tbot

No named result codes; True/False only.

TCF

PASS, FAIL, ERRR, SKIP, BLCK

  • PASS: all went well
  • FAILure: deterministic resolution of the test failing, e.g. we multiply 2*2 and it yields 5, or power measurements from an attached gauge while doing operation X were outside of the expected band.
  • ERRoR: unexpected negative output (for example, a kernel crash while reading a file)
  • SKIP: the DUTs lack the capabilities needed to run the test, and this could only be determined once they were configured, set up and powered up (vs. just looking at the metadata)
  • BLoCK: any infrastructure problem that prevented the test from being carried to completion (e.g.: a network failure communicating with the server, the DUT power switch failing to execute a power-up command, etc.)

Xilinx

PASS, FAIL, UNANALYZED

Yocto Project

Pass, Fail, Skip and Error (error means the testcase broke somehow)