Test Stack Survey

Attached please find a high-level diagram of one view of the test stack.

Diagrams

Here is a diagram for the high level CI loop:

[Diagram: high-level CI loop]



Cover text

Hello Test Framework developer or user,

The purpose of this survey is to try to understand how different Test Frameworks and Automated Test components in the Linux Test ecosystem work - what features they have, what terminology they use, and so forth. The reason to characterize these different pieces of software (and hardware) is to try to come up with definitions for a Test Stack, and possibly API definitions, that will allow different elements to communicate and interact. We are interested in seeing the commonalities and differences between stack elements.

This information will be used, initially, to prepare for discussions about test stack standards at the Automated Testing Summit 2018.

Please see the Glossary below for the meaning of words used in this survey. If you use different words in your framework for the same concept, please let us know. If you think there are other words that should be in the Glossary, please let us know.

Survey Questions

Overview

Please list the major components of your test system.

Just as an example, Fuego can probably be divided into 3 main parts, with somewhat overlapping roles:

  • Jenkins - job triggers, test scheduling, visualization, notification
  • Core - test management (test build, test deploy, test execution, log retrieval)
  • Parser - log conversion to unified format, artifact storage, results analysis

There are lots of details omitted, but you get the idea.
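
To make the split of roles more concrete, here is a minimal, purely illustrative sketch of a CI loop in Python. Every function in it is a placeholder invented for this example; none of this is Fuego's (or any other framework's) actual API.

# Illustrative only: a skeletal CI loop showing how the roles above
# (trigger detection, test management, log parsing) might fit together.
# Every function is a hypothetical placeholder, not a real framework API.

def wait_for_trigger(repo_url):
    # Trigger role (e.g. Jenkins): detect that a new SUT version exists.
    return "v1.0"                      # pretend a new version appeared

def build_and_provision(dut, version):
    # Core role: build the SUT, install it on the DUT, and boot.
    print(f"provisioning {dut} with SUT {version}")

def run_test(dut, test_name):
    # Core role: deploy and execute the test program, and return its log.
    return f"test {test_name} on {dut}: PASS\n"

def parse_and_store(log):
    # Parser role: convert the log to a unified format and store results.
    results = {"result": "PASS" if "PASS" in log else "FAIL"}
    print("stored results:", results)
    return results

if __name__ == "__main__":
    version = wait_for_trigger("https://example.org/sut.git")
    build_and_provision("board-1", version)
    log = run_test("board-1", "hello-world-test")
    parse_and_store(log)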

Which of these aspects of the CI loop does your test framework perform:

The answers can be: "yes", "no", or "provided by the user".

If they are provided by a named component in your system (or by an external module), please provide the name.

Explanations are appreciated, where the answer is not simply yes or no. For example, in Fuego, Jenkins is used for trigger detection (that is, to detect new SUT versions), but the user must install the Jenkins module and configure this themselves.

Does your test framework:

  • detect that the software under test has a new version?
    • if so, how? (e.g. polling a repository, a git hook, scanning a mailing list, etc.; a minimal polling sketch appears after this list)
  • build the software under test?
  • schedule the test for the DUT?
    • select an appropriate individual DUT based on SUT or test attributes?
    • reserve the DUT?
    • release the DUT?
  • check that dependencies are met before a test is run?
  • install the software under test to the DUT?
  • power cycle the DUT?
  • add a new test program to the system?
  • publish a test (not run results, but the test itself)?
  • build the test program?
  • deploy the test program to the DUT?
  • monitor the power usage during a run?
  • gather a kernel trace during a run?
  • claim other hardware resources or machines (other than the DUT) for use during a test?
  • transfer the test logs from the DUT to an artifact store?
    • is this done by pushing from the DUT, pulling from the server, or having a 3rd machine move the data between them?
  • parse the test logs?
  • convert data from test logs into a unified format?
  • store build artifacts?
    • in what format?
  • store run artifacts?
    • in what format?
  • show build artifacts to users?
  • show run artifacts to users?
  • aggregate and filter build artifacts?
  • aggregate and filter run artifacts?
  • generate reports for test runs?
  • notify users of test results by e-mail?
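
As a hedged illustration of the first question above (detecting a new SUT version), here is one way a framework might poll a git repository for new commits. The repository URL, branch, and polling interval are placeholders for this sketch; real frameworks may instead rely on git hooks, mailing-list scanning, or a CI server's SCM plugin.

# Hypothetical polling-based trigger; not taken from any particular framework.
import subprocess
import time

def latest_commit(repo_url, ref="refs/heads/master"):
    # 'git ls-remote' prints "<sha>\t<ref>" for the requested ref.
    out = subprocess.check_output(["git", "ls-remote", repo_url, ref], text=True)
    return out.split()[0] if out else None

def poll_for_new_version(repo_url, interval=300):
    seen = latest_commit(repo_url)
    while True:
        time.sleep(interval)
        current = latest_commit(repo_url)
        if current and current != seen:
            print("new SUT version detected:", current)
            seen = current
            # ...at this point the framework would schedule a test run...

# poll_for_new_version("https://git.example.org/linux.git")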

Languages:

Examples: json, python, yaml, C, javascript, etc.

  • what is the base language of your test framework core?
  • what language or data format is used to store board configuration data? (an illustrative example appears after this list)
  • what language or data format is used to store lab and/or external hardware configuration data?
  • what language or data format is used to store test configuration data?
  • what language or data format is used to store server configuration data?
  • what language or data format is used to store build artifacts?
  • what language or data format is used to store run artifacts?
  • what language or data format is used for online results presentation?
  • what language or data format is used for reports?

Which of these is the user required to learn?
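
For readers unfamiliar with the term, here is a purely illustrative example of the kind of information "board configuration data" usually holds, written as a Python dict. The field names are invented for this sketch; real frameworks keep this data in YAML, JSON, shell variables, or their own formats.

import json

# Hypothetical board configuration; field names are illustrative only.
board_config = {
    "name": "beaglebone-1",
    "architecture": "armv7",
    "transport": "ssh",                  # how the framework talks to the DUT
    "console": "/dev/ttyUSB0",           # serial console device on the lab host
    "power_control": "pdu://lab-pdu/3",  # how the DUT is power-cycled
    "memory_mb": 512,
}

print(json.dumps(board_config, indent=2))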

Can a user do the following with your test framework:

  • request that a test be executed?
  • alter the command line for the test program?
  • save the altered command line for the test program (to share with others)?
  • adjust the environment of the test program?
  • save the altered environment for the test program (to share with others)?
  • see the results of recent tests?
  • set the pass criteria for a test? (an illustrative sketch appears after this list)
    • set the threshold value for a benchmark test?
    • set the list of testcase results to ignore?
  • rate a test?
  • customize a test?
    • specify to skip a testcase?
    • edit the expected value for a test?
    • edit a test program?
    • specify to ignore a testcase result?
  • share information about testcases with other users?
    • share test program customizations?
    • share variants with other users?
    • share pass criteria with other users?
  • share tests with other users?
  • customize the notification criteria?
    • customize the notification mechanism? (e.g. e-mail, text)
  • generate a custom report for a set of runs?
  • save the report parameters to generate the same report in the future?
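
To illustrate the pass-criteria questions above, here is a hedged sketch of how a benchmark threshold and a list of ignored testcases might be applied to results. The data layout and names are invented for this example and are not any framework's real format.

# Hypothetical pass criteria: a benchmark threshold plus testcases to ignore.
pass_criteria = {
    "threshold": 100.0,                        # benchmark must reach at least this value
    "ignore_testcases": ["tc_flaky_network"],  # results to leave out of the verdict
}

def evaluate(measurements, criteria):
    # measurements: {testcase_name: measured_value}
    failures = []
    for name, value in measurements.items():
        if name in criteria["ignore_testcases"]:
            continue
        if value < criteria["threshold"]:
            failures.append(name)
    return ("PASS", []) if not failures else ("FAIL", failures)

print(evaluate({"tc_throughput": 120.5, "tc_flaky_network": 3.0}, pass_criteria))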

Requirements

Does your test framework:

  • require a toolchain or build system for the SUT?
  • require a toolchain or build system for the DUT?
  • require minimum software on the DUT? (a hypothetical check is sketched after this list)
    • If so, what? (e.g. POSIX shell, some other interpreter, specific libraries or tools, etc.)
  • require agent software on the DUT? (e.g. extra software besides production software)
    • If so, what agent?
  • is there optional agent software or libraries for the DUT?
  • require minimum hardware on the DUT? (e.g. memory)
  • require external hardware in your labs?
  • require the user to learn or know a computer language?
    • if so, which one(s)?
  • require the user to learn or know a data format?
    • if so, which one(s)?
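
As a sketch of the "minimum software on the DUT" question, here is one hypothetical way a framework without an agent might verify required tools before running a test, using ssh as the transport. The host name and tool list are placeholders; an agent-based framework would do this through its agent instead.

import subprocess

REQUIRED_TOOLS = ["sh", "grep", "tar"]   # illustrative dependency list

def dut_has(dut_host, tool):
    # 'command -v' is POSIX and exits non-zero if the tool is missing.
    rc = subprocess.call(["ssh", dut_host, f"command -v {tool} >/dev/null"])
    return rc == 0

def check_dependencies(dut_host):
    missing = [t for t in REQUIRED_TOOLS if not dut_has(dut_host, t)]
    if missing:
        print("SKIP: DUT is missing required tools:", missing)
        return False
    return True

# check_dependencies("root@192.168.1.50")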

APIs

Does your test framework:

  • use existing APIs or data formats to interact within itself, or with 3rd-party modules?
  • have a published API for any of its sub-module interactions?
    • If so, please provide a link or links to the APIs.

Sorry - this is kind of open-ended...

  • What is the nature of the APIs you currently use?

Are they:

    • RPCs?
    • Unix-style? (command line invocation, while grabbing sub-tool output; see the example after this list)
    • compiled libraries?
    • interpreter modules or libraries?
    • web-based APIs?
    • something else?
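
As a small, hedged example of the "Unix-style" pattern above, a framework might simply invoke another tool on the command line and capture its output, rather than linking against a library or calling an RPC. 'uname -r' here is just a stand-in for whatever sub-tool is being wrapped.

import subprocess

# Run a sub-tool and grab its stdout (the Unix-style integration pattern).
result = subprocess.run(["uname", "-r"], capture_output=True, text=True, check=True)
print("sub-tool reported:", result.stdout.strip())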

Relationship to other software:

  • what major components does your test framework use? (e.g. Jenkins, MongoDB, Squad, LAVA, etc.)
  • does your test framework interoperate with other test frameworks or software?
    • which ones?

Test lifecycle diagram

Here is a diagram with the QA lifecycle for a test.

[Diagram: QA lifecycle for a test]

Glossary

  • Device under test (DUT) - the hardware or product being tested (consists of hardware under test and software under test) (also 'board', 'target')
  • Software under test (SUT) - the software being tested
  • Lab - a collection of resources for testing one or more DUTs (also 'board farm')
  • Provision (verb) - arrange the DUT and the lab environment (including other external hardware) for a test
    • This may include installing the SUT to the device under test and booting.
  • Dependency - indicates a pre-requisite that must be filled in order for a test to run (e.g. must have root access, must have 100 meg of memory, some program must be installed, etc.)
  • Test agent - software running on the DUT that assists in test operations (e.g. test deployment, execution, log gathering, debugging)
    • One example would be 'adb', for Android-based systems.
  • Transport (noun) - the method of communicating and transferring data between the test system and the DUT
  • Serial console - the Linux console connected over a serial connection
  • Test program - a script or binary on the DUT that performs the test
  • Run (noun) - an execution instance of a test (in Jenkins, a build)
  • Request (noun) - a request to execute a test
  • Build server - a machine that performs builds of the software under test
  • Build artifact - item created during build of the software under test
  • Run artifact - item created during run of the test program
  • Log - one of the run artifacts - output from the test program or test framework
  • Boot - to start the DUT from an off state
  • Deploy - put the test program on the DUT
    • this one is ambiguous - some people use this to refer to SUT installation, and others to test installation
  • Pass criteria - set of constraints indicating pass/fail conditions for a test
  • Result - pass/fail (or something else) for a Run
  • Variant - arguments or data that affect the execution and output of a test (e.g. test program command line; Fuego calls this a 'spec')
  • Monitor - a program or process to watch some attribute (e.g. power) while the test is running
    • This can be on or off the DUT.
  • Trigger - an event that causes the CI loop to start
  • DUT controller - program and hardware for controlling a DUT (reboot, provision, etc.)
  • DUT scheduler - program for managing access to a DUT (take offline, make available for interactive use, schedule tests)
  • Report generation - generation of run data into a formatted output
  • Results query - Selection and filtering of data from runs, to find patterns
  • Visualization - allowing the viewing of test artifacts, in aggregated form (e.g. multiple runs plotted in a single diagram)
  • Notification - communication based on results of test (triggered by results and including results)
  • Bisection - automatic testing of SUT variations to find the source of a problem
  • Log Parsing - extracting information from a log into a machine-processable format (possibly into a common format)
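
As an illustration of the last glossary entry, here is a minimal log-parsing sketch that extracts testcase names and results from a plain-text log into a Python dict. The log format shown is invented; real test programs each have their own output, which is exactly why frameworks need parsers.

import re

SAMPLE_LOG = """\
TEST tc_open ... PASS
TEST tc_read ... FAIL
TEST tc_write ... PASS
"""

def parse_log(text):
    # Pull "<testcase> <result>" pairs out of the log into a dict.
    results = {}
    for match in re.finditer(r"^TEST (\S+) \.\.\. (PASS|FAIL)$", text, re.MULTILINE):
        results[match.group(1)] = match.group(2)
    return results

print(parse_log(SAMPLE_LOG))   # {'tc_open': 'PASS', 'tc_read': 'FAIL', 'tc_write': 'PASS'}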

A couple of miscellaneous notes

  • A Linux boot test is kind of strange, in that the software under test (the Linux kernel) is also the test program (the program that performs the action).
    • Maybe in this case, the test program does not reside on the DUT.
    • Fuego tests are technically composed of a host-side script and (usually) a DUT-side test program.