Test Standards

From eLinux.org
Revision as of 06:59, 7 December 2018

This page will be used to collect information about test standards.

meta-documents

A survey of existing test systems was conducted in the Fall of 2018. The survey and results are here: Test Stack Survey


Here are some things we'd like to standardize in open source automated testing:

Terminology and Framework

Test Definition

  • fields
  • file format (json, xml, etc.)
  • meta-data
  • visualization control
    • ex: chart_config.json
  • instructions
    • what tests can be skipped etc.
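The fields listed above could be gathered into a machine-readable test definition. A minimal sketch in JSON (via Python), with every field name hypothetical — no standard is implied:

```python
import json

# Hypothetical test definition; all field names here are illustrative,
# not part of any agreed standard.
test_definition = {
    "name": "example.syscalls",
    "format_version": "1.0",
    "metadata": {
        "maintainer": "example@example.org",
        "license": "GPL-2.0",
    },
    "visualization": {
        # e.g. a chart_config.json controlling how results are plotted
        "chart_config": "chart_config.json",
    },
    "instructions": {
        # tests that a lab may skip (e.g. on unsupported hardware)
        "skippable": ["syscalls.clock01"],
    },
}

print(json.dumps(test_definition, indent=2))
```

Whether the on-disk format is JSON, XML, or YAML matters less than agreeing on the field names and their meanings.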

Test dependencies

  • how to specify test dependencies
    • ex: assert_define ENV_VAR_NAME
    • ex: kernel_config
  • types of dependencies

See Test_Dependencies
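The two example dependency checks above (an environment variable assertion and a kernel-config requirement) might look like this in practice; a sketch only, with hypothetical helper names, where an unmet dependency maps to a SKIP result rather than a failure:

```python
import os

def assert_define(env_var_name):
    # Hypothetical check: the test declares it needs this environment
    # variable; absence means the test should be skipped, not failed.
    if env_var_name not in os.environ:
        raise LookupError(f"SKIP: {env_var_name} is not defined")

def assert_kernel_config(option, config_lines):
    # config_lines: lines read from the target's kernel config
    # (e.g. /boot/config-$(uname -r)); option like "CONFIG_FTRACE=y".
    if option not in config_lines:
        raise LookupError(f"SKIP: kernel config lacks {option}")

os.environ["BOARD_IP"] = "192.168.1.10"
assert_define("BOARD_IP")
assert_kernel_config("CONFIG_FTRACE=y", ["CONFIG_FTRACE=y", "CONFIG_PRINTK=y"])
print("all dependencies satisfied")
```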


Test Execution API (E)

  • test API
  • host/target abstraction
    • kernel installation
    • file operations
    • console access
    • command execution
  • test retrieval, build, deployment
    • test execution:
      • ex: 'make test'
  • test phases
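The host/target abstraction and test phases above can be sketched as a small interface; the class and method names are hypothetical, and a real lab would implement the same interface over serial console or ssh:

```python
import subprocess

class Target:
    # Hypothetical abstraction over "where the test runs"; real
    # implementations would cover kernel installation, file operations,
    # console access and command execution.
    def run(self, cmd):
        raise NotImplementedError

class LocalTarget(Target):
    # Runs commands on the host itself; handy for exercising the framework.
    def run(self, cmd):
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return result.returncode, result.stdout

def run_test(target, phases):
    # phases: ordered mapping of phase name -> command,
    # e.g. retrieval, build, deployment, execution ('make test').
    for name, cmd in phases.items():
        rc, out = target.run(cmd)
        print(f"{name}: {'ok' if rc == 0 else 'fail'}")
        if rc != 0:
            return False
    return True

ok = run_test(LocalTarget(), {"build": "true", "test": "echo PASS"})
```

The point of the abstraction is that the same phase sequence can be replayed against any target type without changing the test itself.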

Build Artifacts

  • test package format
    • meta-data for each test
    • test results
    • baseline expected results for particular tests on particular platforms

Test package format

This is a package intended to be installed on a target, as opposed to the collection of test definition information that may be stored elsewhere in the test system.

Run Artifacts

  • logs
  • data files (audio, video)
  • monitor results (power log, trace log)
  • snapshots


Results Format

  • subtest results
  • Candidate formats:
    • TAP (TestAnythingProtocol) - https://testanything.org/
    • SubUnit - https://github.com/testing-cabal/subunit
    • JUnit

One aspect of the result format is the result or status code for individual test cases or the test itself. See Test Result Codes and a comparison of TAP, SubUnit and JUnit output formats (https://gist.github.com/ligurio/5e972552c8b0d4f4b5e109564cbfe764).
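Of the candidate formats, TAP is the simplest to illustrate: a plan line `1..N` followed by one `ok`/`not ok` line per test case, with directives such as `# SKIP` carrying additional status codes. A minimal emitter sketch (the `results` structure is hypothetical):

```python
# Minimal TAP (Test Anything Protocol) emitter sketch.
def emit_tap(results):
    # results: list of (name, status) where status is "pass", "fail" or "skip"
    lines = [f"1..{len(results)}"]
    for i, (name, status) in enumerate(results, start=1):
        if status == "pass":
            lines.append(f"ok {i} - {name}")
        elif status == "skip":
            # SKIP is a TAP directive: the line counts as ok but is annotated
            lines.append(f"ok {i} - {name} # SKIP")
        else:
            lines.append(f"not ok {i} - {name}")
    return "\n".join(lines)

print(emit_tap([("boot", "pass"), ("ftrace", "skip"), ("io", "fail")]))
```

SubUnit and JUnit carry richer data (timing, attachments, nested suites) at the cost of needing a parser rather than being human-readable line by line.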

Pass Criteria

  • what tests can be skipped (this is more part of test execution and control)
  • what test results can be ignored (xfail)
  • min required pass counts, max allowed failures
  • thresholds for measurement results
    • requires testcase id, number and operator
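The criteria above (xfail lists, maximum allowed failures, and measurement thresholds keyed by testcase id, number and operator) could be evaluated like this; the criteria structure is hypothetical, modeled directly on the bullets:

```python
import operator

OPS = {">=": operator.ge, "<=": operator.le, ">": operator.gt, "<": operator.lt}

def evaluate(results, criteria):
    # results: {testcase_id: {"status": str, "measurement": float or None}}
    # Failures listed in "xfail" are expected and therefore ignored.
    failures = [tc for tc, r in results.items()
                if r["status"] == "fail" and tc not in criteria.get("xfail", [])]
    if len(failures) > criteria.get("max_allowed_failures", 0):
        return False
    # Threshold entries are (testcase_id, operator, number).
    for tc, op, number in criteria.get("thresholds", []):
        if not OPS[op](results[tc]["measurement"], number):
            return False
    return True

results = {
    "boot_time": {"status": "pass", "measurement": 4.2},
    "flaky_net": {"status": "fail", "measurement": None},
}
criteria = {
    "xfail": ["flaky_net"],                    # known failure, ignored
    "max_allowed_failures": 0,
    "thresholds": [("boot_time", "<=", 5.0)],  # testcase id, operator, number
}
print(evaluate(results, criteria))
```

Keeping pass criteria as data, separate from the test itself, lets a lab tighten or relax thresholds per platform without modifying the test.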