Test Results Format Notes

From eLinux.org
 

Revision as of 18:58, 26 September 2019

This document has information about various test results formats, and their strengths and weaknesses.

Introduction

The results format is the output from the test, and is part of the interface between the test program and the test execution layer (or test harness).

The main thing that the format communicates is the list of testcases (or metrics, in the case of benchmarks) and the result of each testcase (pass, fail, etc.).

Good starting documents that describe different test report formats are:

  • https://github.com/ligurio/testres/wiki/Everything-you-need-to-know-about-software-testing-report-formats
  • a comparison of TAP, SubUnit and JUnit output formats: https://gist.github.com/ligurio/5e972552c8b0d4f4b5e109564cbfe764

Existing output formats

Here are some of the existing formats that are used by various test programs and frameworks:

Elements

A test output format needs to communicate the following information:

  • testcase identifiers (names or descriptions or ID numbers)
  • result of the testcase (pass, fail, skip, error, xfail)
  • additional information
    • counts (aggregate data)
    • subtest results
    • diagnostic information - general information that may help diagnose the test operation
    • reason - text explaining why a test passed or failed
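
The elements above can be illustrated with a small TAP-style output fragment (a sketch; the testcase names are invented):

```
TAP version 13
1..3
ok 1 - boot_time_check
not ok 2 - rtc_read # reason: device busy
ok 3 - cpu_hotplug # SKIP not supported on this board
```

The plan line ('1..3') carries the aggregate count, each subsequent line carries a testcase identifier and its result, and the text after '#' carries reason or diagnostic information.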

testcase identifiers

There should be a way to identify a test, so that when a test is repeated it can be determined if the test result changed or not. The testcase identifier could be a number, a short name, or a description. But it should be the same every time the test is run (it should be invariant over test invocations).

Many test developers will change the output related to a testcase based on the testcase result. There needs to be a portion of the testcase output that is invariant, and which can be parsed to an identifier that is unique within a single run of the test.
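
As a sketch of this idea, the following (hypothetical) parser extracts an invariant identifier and a result from each TAP-style line, so that results from two runs can be compared by identifier; the regex and line format here are assumptions, not a standard:

```python
import re

# Match TAP-style lines such as "ok 1 - rtc_read" or "not ok 2 - cpu_hotplug".
# The testcase name after the dash is the invariant identifier; the leading
# "ok"/"not ok" and the sequence number may vary between runs.
LINE_RE = re.compile(r"^(ok|not ok)\s+\d+\s+-\s+(?P<ident>\S+)")

def parse_results(log_text):
    """Return a dict mapping testcase identifier -> 'pass' or 'fail'."""
    results = {}
    for line in log_text.splitlines():
        m = LINE_RE.match(line)
        if m:
            results[m.group("ident")] = "pass" if m.group(1) == "ok" else "fail"
    return results

# Compare two runs by identifier to find testcases whose result changed.
run1 = "ok 1 - rtc_read\nnot ok 2 - cpu_hotplug"
run2 = "ok 1 - rtc_read\nok 2 - cpu_hotplug"
r1, r2 = parse_results(run1), parse_results(run2)
changed = {t for t in r1 if r1[t] != r2.get(t)}
print(changed)  # -> {'cpu_hotplug'}
```

Because the identifier is invariant, the comparison works even though the per-run output (result text, sequence numbers) differs between invocations.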

result strings

One aspect of the result format is the result or status code for individual test cases or the test itself.

Result codes

See Test Result Codes.

Metric data

Metric or measurement data is a string indicating the value for an operation. This is usually used for performance, timing or other number-related data (such as that reported by benchmarks).

The metric data needs to report a number, and most likely a 'units' string indicating how the number should be interpreted.

There may also be some additional information associated with a metric (or measurement), indicating parameters used to determine whether the value indicates success or failure of the related testcase.
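
One way to model this is sketched below: a metric carries a value, a units string, and threshold parameters from which a pass/fail result can be derived. The field names and the simple threshold rule are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    """Hypothetical metric record: value + units + pass/fail parameters."""
    name: str
    value: float
    units: str                 # how to interpret the number, e.g. "seconds"
    threshold: float           # reference value for the pass/fail decision
    lower_is_better: bool = True

    def result(self):
        # Decide success or failure of the related testcase from the value.
        if self.lower_is_better:
            ok = self.value <= self.threshold
        else:
            ok = self.value >= self.threshold
        return "pass" if ok else "fail"

boot = Metric("boot_time", value=4.2, units="seconds", threshold=5.0)
print(boot.result())  # -> pass (4.2 <= 5.0)
```

A benchmark metric without thresholds could simply omit the pass/fail decision and report only the number and units.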

parser helper information

Some tests use simple line-based output. Here is an idea for how a program or log might provide information about its output format, allowing the test framework to perform introspection on the logs.

Note that this is a fallback mechanism for when a test has already been written with some ad-hoc consistency in its output. It is much preferred when writing new tests to use one of the existing test output formats.
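
A minimal sketch of the idea, assuming the log's first line carries a regex (with named groups) describing the ad-hoc line format; the '#PARSER:' convention here is invented for illustration:

```python
import re

# A log whose first line is a "parser helper": a regex the framework can use
# to introspect the rest of the log. The line format is ad-hoc but consistent.
log = """\
#PARSER: ^TEST (?P<ident>\\w+): (?P<result>PASSED|FAILED)$
TEST wifi_scan: PASSED
TEST usb_enum: FAILED
"""

lines = log.splitlines()
# Compile the helper regex from the first line, then apply it to the rest.
helper_re = re.compile(lines[0][len("#PARSER: "):])
results = {}
for line in lines[1:]:
    m = helper_re.match(line)
    if m:
        results[m.group("ident")] = m.group("result")
print(results)  # -> {'wifi_scan': 'PASSED', 'usb_enum': 'FAILED'}
```

With this scheme the framework needs no built-in knowledge of the test's output format; the test (or an accompanying file) supplies the parsing rule itself.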

More notes

TAP version 14

The effort to create TAP version 14 has stalled.

Version 14 was intended to capture current practices that are already in use. The pull request for version 14, and the resulting discussion, can be found at:

 * https://github.com/TestAnything/testanything.github.io/pull/36/files

You can see the full version 14 document in the submitter's repo:

 $ git clone https://github.com/isaacs/testanything.github.io.git
 $ cd testanything.github.io
 $ git checkout tap14
 $ ls tap-version-14-specification.md

Standards

For the Linux kernel selftests, the preferred output format is TAP (Test Anything Protocol).