Test Standards

Revision as of 12:41, 15 November 2018

This page will be used to collect information about test standards.

Meta-documents

A survey of existing test systems was conducted in the Fall of 2018. The survey and results are here: Test Stack Survey


Here are some things we'd like to standardize in open source automated testing:

Terminology and Framework

  • test nomenclature (test glossary)
  • CI loop diagram

Test Definition

  • fields
  • file format (json, xml, etc.)
  • meta-data
  • visualization control
    • ex: chart_config.json
  • instructions
    • what tests can be skipped etc.
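As a concrete illustration (not an agreed format), a test definition along these lines might be a small JSON file carrying the fields, meta-data, visualization control and instructions listed above. In the Python sketch below, every field name is an assumption for illustration only; only the chart_config.json reference comes from the list above.

  import json

  # Hypothetical test definition; all field names are illustrative,
  # not part of any agreed standard.
  test_definition = {
      "name": "cyclictest-latency",
      "description": "Measure scheduling latency with cyclictest",
      "metadata": {
          "author": "example@example.org",
          "license": "GPL-2.0",
          "version": "1.0",
      },
      "visualization": {
          # per-test chart configuration, analogous to chart_config.json
          "chart_config": "chart_config.json",
      },
      "instructions": {
          # which test cases may be skipped
          "skippable": ["case-highres-timer"],
      },
  }

  if __name__ == "__main__":
      # A common file format (JSON here) would let different frameworks
      # parse the same definition.
      print(json.dumps(test_definition, indent=2))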

Test Dependencies

  • how to specify test dependencies
    • ex: assert_define ENV_VAR_NAME
    • ex: kernel_config
  • types of dependencies

See Test_Dependencies
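As a rough sketch of how such dependencies might be declared and checked, the Python fragment below skips a test when a required environment variable or kernel config option is missing. The helper names are assumptions modeled loosely on the assert_define and kernel_config examples above; the real mechanism is framework-specific.

  import gzip
  import os
  import sys

  def assert_define(var_name):
      # Skip (rather than fail) when a required environment variable
      # is not defined -- one possible dependency mechanism.
      if var_name not in os.environ:
          print(f"SKIP: required variable {var_name} is not defined")
          sys.exit(0)

  def require_kernel_config(option, config_path="/proc/config.gz"):
      # Hypothetical check that a kernel config option is enabled;
      # a real framework would query the target, not the host.
      try:
          with gzip.open(config_path, "rt") as f:
              if f"{option}=y" in f.read():
                  return
      except OSError:
          pass
      print(f"SKIP: kernel option {option} not enabled")
      sys.exit(0)

  if __name__ == "__main__":
      assert_define("SDCARD_DEV")          # ex: assert_define ENV_VAR_NAME
      require_kernel_config("CONFIG_MMC")  # ex: kernel_config
      print("dependencies satisfied; running test...")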


Test Execution API (E)

  • test API
  • host/target abstraction
    • kernel installation
    • file operations
    • console access
    • command execution
  • test retrieval, build, deployment
    • test execution:
      • ex: 'make test'
  • test phases
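One way to picture the host/target abstraction is a small interface that a framework implements per board or transport. The class and method names below are assumptions for illustration; they simply mirror the operations listed above (command execution, file operations, console access), plus a 'make test' execution step.

  import subprocess

  class Target:
      # Hypothetical host/target abstraction; names are illustrative.
      def run(self, command):
          raise NotImplementedError     # command execution
      def put_file(self, local, remote):
          raise NotImplementedError     # file operations
      def get_console_log(self):
          raise NotImplementedError     # console access

  class SSHTarget(Target):
      def __init__(self, host):
          self.host = host
      def run(self, command):
          # Run a command on the target over ssh; return its exit code.
          return subprocess.call(["ssh", self.host, command])
      def put_file(self, local, remote):
          return subprocess.call(["scp", local, f"{self.host}:{remote}"])
      def get_console_log(self):
          # Serial console capture is board-specific; omitted here.
          return ""

  if __name__ == "__main__":
      board = SSHTarget("root@192.168.1.50")        # hypothetical address
      board.run("cd /opt/tests/hello && make test")  # ex: 'make test'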

Build Artifacts

  • test package format
    • meta-data for each test
    • test results
    • baseline expected results for particular tests on particular platforms
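To make the packaging idea concrete, here is a sketch (in Python form, with purely illustrative names) of what one packaged test might carry: its definition and meta-data, a place for results, and per-platform baseline files.

  # Hypothetical contents of a packaged test; all names are illustrative.
  test_package = {
      "definition": "test.json",          # meta-data for the test
      "program": "run.sh",
      "visualization": "chart_config.json",
      "baselines": {                      # expected results per platform
          "beaglebone-black": "baseline-bbb.json",
          "raspberrypi3": "baseline-rpi3.json",
      },
      "results_dir": "results/",          # filled in by a test run
  }

  print(test_package)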


Run Artifacts

  • logs
  • data files (audio, video)
  • monitor results (power log, trace log)
  • snapshots
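For illustration only, the artifacts collected from one run could be indexed by a small manifest; the categories below come from the list above, while the file names are assumptions.

  # Hypothetical manifest of artifacts saved from one test run.
  run_artifacts = {
      "logs": ["run.log", "console.log"],
      "data_files": ["capture.wav", "capture.mp4"],   # audio, video
      "monitor_results": ["power.log", "trace.dat"],  # power log, trace log
      "snapshots": ["rootfs-after-test.tar.gz"],
  }

  for kind, files in run_artifacts.items():
      print(kind, "->", ", ".join(files))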


Results Format

One aspect of the result format is the result or status code for individual test cases or the test itself. See Test Result Codes
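As a sketch only, a standardized results file might record one status code per test case (using commonly seen codes such as PASS, FAIL and SKIP) together with any measurements. The structure and field names below are assumptions, not a settled format.

  import json

  # Hypothetical results for one test run; field names are illustrative.
  results = {
      "test": "cyclictest-latency",
      "board": "beaglebone-black",
      "start_time": "2018-11-15T12:41:00Z",
      "test_cases": [
          {"id": "case-avg-latency", "status": "PASS",
           "measurement": 52, "units": "us"},
          {"id": "case-max-latency", "status": "FAIL",
           "measurement": 1430, "units": "us"},
          {"id": "case-highres-timer", "status": "SKIP"},
      ],
  }

  if __name__ == "__main__":
      print(json.dumps(results, indent=2))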

Pass Criteria

  • what tests can be skipped (this is more part of test execution and control)
  • what test results can be ignored (xfail)
  • min required pass counts, max allowed failures
  • thresholds for measurement results
    • requires testcase id, number and operator
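As a sketch of how pass criteria might be expressed and evaluated, the snippet below encodes each threshold as a (testcase id, operator, number) triple, along with a maximum allowed failure count and an xfail list of results to ignore. Every name and value here is an assumption for illustration, not a proposed standard.

  import operator

  # Hypothetical pass-criteria description; names are illustrative.
  criteria = {
      "max_failures": 0,
      "xfail": ["case-known-bug"],           # results that can be ignored
      "thresholds": [
          ("case-max-latency", "<", 1000),   # (testcase id, operator, number)
      ],
  }

  OPS = {"<": operator.lt, "<=": operator.le,
         ">": operator.gt, ">=": operator.ge, "==": operator.eq}

  def evaluate(results, criteria):
      failures = 0
      by_id = {tc["id"]: tc for tc in results["test_cases"]}
      # Count real failures, ignoring any listed as xfail.
      for tc in results["test_cases"]:
          if tc["status"] == "FAIL" and tc["id"] not in criteria["xfail"]:
              failures += 1
      # Apply measurement thresholds.
      for case_id, op, limit in criteria["thresholds"]:
          tc = by_id.get(case_id)
          if tc is None or not OPS[op](tc["measurement"], limit):
              failures += 1
      return failures <= criteria["max_failures"]

  if __name__ == "__main__":
      results = {"test_cases": [
          {"id": "case-max-latency", "status": "PASS", "measurement": 870},
          {"id": "case-known-bug", "status": "FAIL"},
      ]}
      print("overall:", "PASS" if evaluate(results, criteria) else "FAIL")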