Test Standards
Revision as of 12:41, 15 November 2018
This page will be used to collect information about test standards.
meta-documents
- https://tools.ietf.org/html/rfc2119 - IETF MUST, SHALL, MAY, etc. wording standards
A survey of existing test systems was conducted in the Fall of 2018. The survey and results are here: Test Stack Survey
Here are some things we'd like to standardize in open source automated testing:
Terminology and Framework
- test nomenclature (test glossary)
- CI loop diagram
Test Definition
- fields
- file format (json, xml, etc.)
- meta-data
- visualization control
  - ex: chart_config.json
- instructions
  - what tests can be skipped, etc.
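As a sketch, a test definition along these lines could be captured in JSON. The field names below are illustrative assumptions, not a settled standard:

```python
import json

# Hypothetical test definition covering the fields above.
# All field names are illustrative assumptions, not a defined standard.
test_definition = {
    "name": "example_test",
    "metadata": {
        "author": "tester@example.com",  # assumed meta-data field
        "version": "1.0",
    },
    "visualization": {
        "chart_config": "chart_config.json",  # per-test chart configuration
    },
    "instructions": {
        "skippable_tests": ["subtest_optional"],  # tests that can be skipped
    },
}

# Serialize to the on-disk file format (json here; xml would also work).
print(json.dumps(test_definition, indent=2))
```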
Test dependencies
- how to specify test dependencies
  - ex: assert_define ENV_VAR_NAME
  - ex: kernel_config
- types of dependencies
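A minimal sketch of how an assert_define-style dependency check might work. The convention of reporting a missing dependency as a skip, and the BOARD_IP variable, are assumptions for illustration:

```python
import os

def assert_define(var_name):
    """Check a test dependency on an environment variable.

    Returns True if the variable is defined; otherwise reports a
    skip and returns False so the caller can abort the test.
    """
    if var_name in os.environ:
        return True
    print(f"SKIP: required variable {var_name} is not defined")
    return False

# Example: a test declaring its dependencies up front.
os.environ["BOARD_IP"] = "192.168.1.50"  # normally set by the test harness
if assert_define("BOARD_IP"):
    print("dependency satisfied, test can proceed")
```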
Test Execution API (E)
- test API
- host/target abstraction
- kernel installation
- file operations
- console access
- command execution
- test retrieval, build, deployment
- test execution
  - ex: 'make test'
- test phases
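One way to picture the host/target abstraction is as an object that hides how commands and files reach the board. This is a sketch only; the class and method names are assumptions, and a real implementation would talk over ssh, serial, or adb rather than running locally:

```python
import subprocess

class Target:
    """Hypothetical host/target abstraction; names are illustrative."""

    def __init__(self, name):
        self.name = name

    def run_command(self, cmd):
        """Execute a command on the target; return (exit_code, output)."""
        # Sketch: runs locally; a real target would use ssh/serial/adb.
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return result.returncode, result.stdout

    def put_file(self, src, dest):
        """Copy a file from host to target (stubbed as a local copy)."""
        rc, _ = self.run_command(f"cp {src} {dest}")
        return rc == 0

target = Target("local-board")
rc, out = target.run_command("echo make test")
print(rc, out.strip())
```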
Build Artifacts
- test package format
- meta-data for each test
- test results
- baseline expected results for particular tests on particular platforms
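A test package could bundle the per-test meta-data and baseline expected results into one archive. The layout and file names below are assumptions, shown here with a tar.gz container:

```python
import io
import json
import tarfile

# Hypothetical test-package contents; the layout is an assumption.
files = {
    "test.json": json.dumps({"name": "example_test", "version": "1.0"}),
    "baselines/boardA.json": json.dumps({"expected": "PASS"}),
}

# Build the package in memory as a gzipped tarball.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name, content in files.items():
        data = content.encode()
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

print(f"package size: {buf.tell()} bytes")
```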
Run Artifacts
- logs
- data files (audio, video)
- monitor results (power log, trace log)
- snapshots
Results Format
- test log output format
- counts
- subtest results
- Candidate formats:
One aspect of the result format is the result or status code for individual test cases or the test itself. See Test Result Codes
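To make the counts and subtest-result ideas concrete, here is a sketch of a results record. The status-code set used (PASS/FAIL/SKIP) is a common convention, not a fixed standard, and the field names are assumptions:

```python
from collections import Counter

# Hypothetical per-subtest results with individual status codes.
subtest_results = [
    {"id": "open_device", "status": "PASS"},
    {"id": "read_block", "status": "PASS"},
    {"id": "write_block", "status": "FAIL"},
    {"id": "power_cycle", "status": "SKIP"},
]

# Derive the summary counts from the individual subtest results.
counts = Counter(r["status"] for r in subtest_results)
summary = {
    "counts": dict(counts),
    "subtests": subtest_results,
}
print(summary["counts"])
```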
Pass Criteria
- what tests can be skipped (this is more part of test execution and control)
- what test results can be ignored (xfail)
- min required pass counts, max allowed failures
- thresholds for measurement results
  - requires testcase id, number and operator
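The criteria above can be sketched as data plus a small evaluation routine. Each threshold carries a testcase id, a number, and an operator, as the list suggests; all field names here are assumptions for illustration:

```python
import operator

# Hypothetical pass-criteria record; field names are assumptions.
criteria = {
    "max_allowed_failures": 0,
    "min_required_passes": 3,
    "ignore": ["power_cycle"],  # results that can be ignored (xfail)
    "thresholds": [
        # each threshold: testcase id, number, and operator
        {"tc_id": "latency.avg_ms", "value": 50, "op": "le"},
    ],
}

OPS = {"le": operator.le, "ge": operator.ge, "lt": operator.lt, "gt": operator.gt}

def evaluate(results, measurements, criteria):
    """Return True if a run meets the pass criteria."""
    considered = {k: v for k, v in results.items() if k not in criteria["ignore"]}
    passes = sum(1 for s in considered.values() if s == "PASS")
    failures = sum(1 for s in considered.values() if s == "FAIL")
    if failures > criteria["max_allowed_failures"]:
        return False
    if passes < criteria["min_required_passes"]:
        return False
    for t in criteria["thresholds"]:
        if not OPS[t["op"]](measurements[t["tc_id"]], t["value"]):
            return False
    return True

results = {"open_device": "PASS", "read_block": "PASS",
           "write_block": "PASS", "power_cycle": "FAIL"}
measurements = {"latency.avg_ms": 42}
print(evaluate(results, measurements, criteria))
```

Here the run passes: the one FAIL is on an ignored testcase, the pass count meets the minimum, and the measured latency satisfies its threshold.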