Test Standards

This page collects information about test standards.

meta-documents

A survey of existing test systems was conducted in the Fall of 2018. The survey and results are here: Test Stack Survey


Here are some things we'd like to standardize in open source automated testing:

Terminology and Framework

  • test nomenclature (test glossary)
  • CI loop diagram

Test Definition

  • fields
  • file format (json, xml, etc.)
  • meta-data
  • visualization control
    • ex: chart_config.json
  • instructions
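
To make the pieces above concrete, here is a minimal sketch of what a JSON test definition might look like, expressed as a Python dictionary. The field names (name, metadata, instructions, visualization) are illustrative assumptions, not an agreed standard.

    import json

    # Hypothetical test definition; every field name here is an
    # assumption chosen to mirror the list above.
    test_definition = {
        "name": "Benchmark.Dhrystone",
        "metadata": {
            "author": "example@example.org",
            "license": "GPL-2.0",
            "tags": ["benchmark", "cpu"],
        },
        "instructions": ["make", "./dhrystone 1000000"],
        # visualization control, e.g. a reference to a chart_config.json
        "visualization": {"chart_config": "chart_config.json"},
    }

    print(json.dumps(test_definition, indent=2))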

Pass Criteria

  • what tests can be skipped (this is more part of test execution and control)
  • what test results can be ignored (xfail)
  • min required pass counts, max allowed failures
  • thresholds for measurement results
    • requires a testcase id, a number and an operator (see the sketch below)
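
As a sketch of how such criteria might be encoded and evaluated (all field names here are assumptions): a threshold pairs a testcase id with a number and a comparison operator, alongside the pass/fail counts and ignore list described above.

    import operator

    OPERATORS = {"ge": operator.ge, "le": operator.le,
                 "gt": operator.gt, "lt": operator.lt}

    # Hypothetical pass-criteria record
    pass_criteria = {
        "max_fail": 0,                          # max allowed failures
        "min_pass": 10,                         # min required pass count
        "ignore": ["Dhrystone.known_xfail"],    # results to ignore (xfail)
        "thresholds": [
            {"tguid": "Dhrystone.score", "op": "ge", "value": 2000.0},
        ],
    }

    def check_threshold(tguid, measured, criteria):
        """Return True if a measured value satisfies its threshold."""
        for t in criteria["thresholds"]:
            if t["tguid"] == tguid:
                return OPERATORS[t["op"]](measured, t["value"])
        return True  # no threshold defined for this testcase

    print(check_threshold("Dhrystone.score", 2350.0, pass_criteria))  # True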

Test dependencies

  • how to specify test dependencies
    • ex: assert_define ENV_VAR_NAME
    • ex: kernel_config
  • types of dependencies

See Test_Dependencies
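
As a sketch of how the two example dependency types above might be checked at run time (the function names and the kernel config location are assumptions):

    import gzip
    import os

    def assert_define(var_name):
        """Dependency check: a required environment variable must be set."""
        if var_name not in os.environ:
            raise RuntimeError(f"SKIP: required variable {var_name} is not defined")

    def assert_kernel_config(option, config_path="/proc/config.gz"):
        """Dependency check: a kernel config option (e.g. CONFIG_FTRACE) is enabled."""
        with gzip.open(config_path, "rt") as f:
            if not any(line.startswith(option + "=") for line in f):
                raise RuntimeError(f"SKIP: kernel option {option} is not enabled")

    try:
        assert_define("BOARD_IP")
        assert_kernel_config("CONFIG_FTRACE")
    except (RuntimeError, OSError) as e:
        print(e)  # an unmet dependency would mark the test SKIP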

Test Execution API (E)

  • test API
  • host/target abstraction (sketched below this list)
    • kernel installation
    • file operations
    • console access
    • command execution
  • test retrieval, build, deployment
    • test execution:
      • ex: 'make test'
  • test phases
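
One possible shape for the host/target abstraction is an interface that hides whether the board is reached over ssh, a serial console, or something else. This is a sketch; the class and method names are assumptions, not a proposed API.

    import subprocess
    from abc import ABC, abstractmethod

    class Target(ABC):
        """Host-side handle for a target board."""

        @abstractmethod
        def put(self, local_path, remote_path):
            """File operation: copy a file to the target."""

        @abstractmethod
        def run(self, command):
            """Command execution: run a command, return (rc, output)."""

    class SSHTarget(Target):
        """A Target reached over ssh/scp."""

        def __init__(self, host):
            self.host = host

        def put(self, local_path, remote_path):
            subprocess.run(["scp", local_path, f"{self.host}:{remote_path}"],
                           check=True)

        def run(self, command):
            proc = subprocess.run(["ssh", self.host, command],
                                  capture_output=True, text=True)
            return proc.returncode, proc.stdout

    board = SSHTarget("my-board")  # hypothetical host name
    # rc, out = board.run("uname -r")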

Run Artifacts

  • logs
  • data files (audio, video)
  • monitor results (power log, trace log)
  • snapshots
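
A run might bundle these artifacts with a small manifest so that results parsers and dashboards can find them; the layout and file names below are assumptions.

    import json

    # Hypothetical per-run artifact manifest
    run_artifacts = {
        "logs": ["console.log", "testlog.txt"],
        "data_files": ["audio-capture.wav"],
        "monitors": ["power.csv", "trace.dat"],
        "snapshots": ["fs-snapshot.tar.gz"],
    }

    print(json.dumps(run_artifacts, indent=2))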

Results Format

result codes

A result code is one value from an enumerated set of valid results. That is, the result of a testcase must be one of a fixed set of values.

Existing test systems use many different status values to indicate a test outcome; some are surveyed below.
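
In code, a result code set is naturally an enumeration whose members are the only valid outcomes. The sketch below uses the four values that recur in the survey; the choice of exactly these four is illustrative, not settled.

    from enum import Enum

    class Result(Enum):
        PASS = "PASS"
        FAIL = "FAIL"
        ERROR = "ERROR"
        SKIP = "SKIP"

    outcome = Result("PASS")   # parsing a reported status string
    assert outcome is Result.PASS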

examples survey

LTP
  • TPASS
  • TFAIL
  • TSKIP
  • TBROK

Fuego

See http://fuegotest.org/wiki/run.json, the 'status' field:

  • PASS - a testcase, test set or test suite completed successfully
  • FAIL - a testcase, test set or test suite was unsuccessful
  • ERROR - a test did not execute properly (e.g. the test program did not run correctly)
  • SKIP - a test was not executed, usually due to an invalid configuration (missing some prerequisite)
pytest

pytest reports each test with one of the following outcomes:

  • passed
  • failed
  • skipped
  • xfailed (an expected failure, marked with xfail)
  • xpassed (a test marked xfail that unexpectedly passed)
  • error (a problem in setup or teardown rather than in the test itself)
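
Since each system in the survey uses its own code set, a standard results format needs a mapping onto one common set. The sketch below normalizes the surveyed codes to PASS/FAIL/ERROR/SKIP; the individual choices, such as folding LTP's TBROK into ERROR or xpassed into FAIL, are assumptions shown for illustration.

    NORMALIZE = {
        # LTP
        "TPASS": "PASS", "TFAIL": "FAIL", "TSKIP": "SKIP", "TBROK": "ERROR",
        # Fuego already uses the common set
        "PASS": "PASS", "FAIL": "FAIL", "ERROR": "ERROR", "SKIP": "SKIP",
        # pytest
        "passed": "PASS", "failed": "FAIL", "skipped": "SKIP",
        "error": "ERROR", "xfailed": "PASS", "xpassed": "FAIL",
    }

    def normalize(code):
        return NORMALIZE.get(code, "ERROR")  # treat unknown codes as ERROR

    print(normalize("TBROK"))  # ERROR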

Build Artifacts

  • test package format
    • meta-data for each test
    • test results
    • baseline expected results for particular tests on particular platforms
    • what tests can be skipped, etc.
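
As a sketch of how baseline expected results might be used (the file layout and naming, baseline-<platform>.json, are assumptions): a package ships one baseline per platform, and a run is judged by comparing actual results against it.

    import json

    def load_baseline(platform):
        """Load the expected results shipped in the test package."""
        with open(f"baseline-{platform}.json") as f:
            return json.load(f)   # e.g. {"LTP.fork05": "FAIL", ...}

    def compare(results, baseline):
        """Report testcases whose result differs from the baseline."""
        return {tc: (baseline.get(tc), r)
                for tc, r in results.items()
                if baseline.get(tc) != r}

    # Inline example data instead of loading from a file:
    baseline = {"LTP.abort01": "PASS", "LTP.fork05": "FAIL"}  # known failure
    results  = {"LTP.abort01": "PASS", "LTP.fork05": "PASS"}
    print(compare(results, baseline))  # {'LTP.fork05': ('FAIL', 'PASS')}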