Test Stack Layers
Here is some information on standards for test stack layers.
Different people take different approaches to the various sub-tasks of an automated testing stack.
Andrew Murray (the Farm stack)
Andrew posted some thoughts on different layers (using a farm metaphor). Here is the original message: https://lists.yoctoproject.org/pipermail/automated-testing/2017-November/000134.html
Grass
The Grass - This is the bottom layer, which provides the bare minimum software abstraction over the fundamental capabilities of the farm. This consists of physical devices and suitable drivers for them. The interface to higher layers speaks in terms of 'turn power port 2 on', 'bridge relay 5 on', etc. The goal here is for us to be able to pick up some hardware off the shelf (e.g. an APC power switch) and to have a 'driver' already available from this community that fits into this view of the world. In order to achieve this it would be necessary to define categories of devices (power, relay, serial), the verbs (power cycle, power off, etc.) and a way of addressing physical devices. I believe the LabGrid stack has a good model in this respect.
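As a rough sketch, a grass-level 'driver' could be a small script per device category that maps the shared verbs onto the device's native control protocol. Everything below (the script name, the SNMP community, and the OID values) should be treated as illustrative, though APC switches are indeed commonly controlled over SNMP:

    # apc-power.sh - illustrative grass-level driver for an APC power switch
    # usage: apc-power.sh <on|off|cycle> <port-number>
    ACTION=$1
    PORT=$2
    HOST=${APC_HOST:-192.168.1.50}      # address of the power switch
    case $ACTION in
        on)    VAL=1 ;;                 # outlet control values: 1=on,
        off)   VAL=2 ;;                 # 2=off, 3=reboot (illustrative)
        cycle) VAL=3 ;;
        *)     echo "usage: $0 <on|off|cycle> <port>" >&2 ; exit 1 ;;
    esac
    snmpset -v1 -c private $HOST .1.3.6.1.4.1.318.1.1.4.4.2.1.3.$PORT i $VAL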
Cow
The Cow - This is the middle layer, which uses the bottom layer and provides an abstraction of a board; this is where boards are defined and related to the physical devices. This layer would manage exclusive access to boards and self-tests. Users of this layer could find out what boards are available (access control), what their capabilities are, and get access to those capabilities. The language used here may be 'turn board on', 'get serial port', 'press BOOT switch', etc. The goal here is that we can work as a community to create definitions of boards - a bit like device tree, at a high physical level.
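To make this concrete, a cow-level board definition might do little more than bind board capabilities to grass-level devices and expose the board-level verbs. The file format and every name below are invented for illustration:

    # beaglebone1.board - hypothetical cow-level board definition,
    # binding board capabilities to grass-level devices
    POWER_CONTROL="apc-power.sh"    # grass driver for power
    POWER_PORT=2                    # outlet this board is plugged into
    SERIAL_DEV=/dev/ttyUSB3         # serial console connection
    BOOT_SWITCH="relay-ctl.sh 5"    # relay wired to the BOOT button

    # board-level verbs implemented on top of the devices:
    board_on()   { $POWER_CONTROL on  $POWER_PORT ; }
    board_off()  { $POWER_CONTROL off $POWER_PORT ; }
    press_boot() { $BOOT_SWITCH on ; sleep 1 ; $BOOT_SWITCH off ; }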
Dairy
The Dairy - This is the top layer, which uses the middle layer. This is the application layer; applications can focus on their value-add, which may be running tests (LAVA) or providing remote access to boards (perhaps with additional features, such as 'workspaces' with prebuilt and predeployed software images).
Pawel Wieczorek (SLAV stack)
From Pawel's presentation at ELC 2018: PDF (see also ELC_2018_Presentations for a link to the video).
In the SLAV stack, there is a DUT controller board for each DUT. They use the MuxPi as the DUT controller board.
Dryad
Manages a single DUT
- Fully aware of its capabilities
- Requires only two interfaces
- Power supply
- Network connection (Ethernet)
A Dryad can:
- boot the DUT (performs a full power cycle)
- log in to the DUT
- copy a file to the DUT
- copy a file from the DUT
- exec a command on the DUT
A command line tool called 'stm' is used to multiplex an SD card between the DUT and the test server. stm is also used to turn power to a board on and off, and it can provide a sample of the power (current) used by the board. (An example sequence follows the command list below.)
stm commands:
- -clr = clear the display
- -cur = get reading of the current drawn by the DUT
- -cur-duration = set duration to record current sample
- -cur-get = get CSV of sample recorded
- -cur-influx = read current drawn by DUT and push it to influx db
- -cur-size = set sample size
- -cur-start = start sampling DUT current
- -cur-stop = stop sampling DUT current
- -dut = connect SD card to DUT
- -dyper1 = switch dyper1 to the given state (on/off)
- -dyper2 = switch dyper2 to the given state (on/off)
- -flash-firmware = flash the firmware with the indicated filename
- -hdmi = switch HDMI HOTPLUG pin (on/off)
- -led1 = set the color of led1 to an RGB value
- -led2 = set the color of led2 to an RGB value
- -listen = set path to socket on which user and admin RPC interface will be served
- -m = time delay for the tick command
- -print = print text on the display
- -print-x = x coordinate for print command
- -print-y = y coordinate for print command
- -remote = path to socket to use as an RPC service, instead of local connection
- -serve = start RPC service
- -tick = power cycle the DUT
- -ts = connect SD card to test server
- -user-listen = path to socket on which user RPC interface will be served
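Putting a few of these together, a typical re-provisioning sequence might look like the following (the flags come from the list above; the exact invocation syntax, image name, and device path are assumptions):

    # write a new image to the SD card, then boot the DUT from it
    stm -ts                                # connect the SD card to the test server
    dd if=new-image.img of=/dev/sdX bs=4M  # (hypothetical image and device)
    stm -dut                               # hand the SD card back to the DUT
    stm -tick                              # power cycle the DUT to boot the image

    # sample the DUT's current draw during a test
    stm -cur-start
    run_some_test                          # placeholder for the actual test
    stm -cur-stop
    stm -cur-get > current-draw.csv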
A command line tool called 'fota' is used to ...?
Boruta
Boruta is a Dryad farm management system that manages access to multiple boards in a farm.
- Schedules requests
- Priority
- Device groups
- Delayed access
- Provides convenient access to selected Dryad
This supports reservations for the automated test system
Boruta:
- allocates boards (reserves them?)
- matches requirements
- prepares environment
- sets up tunnel
Verbs (APIs) supported:
- /reqs (get) - list all requests
- /reqs (post) - create a new request
- /reqs/list (post) - filter and list requests
- /reqs/{ReqID} (get) - get info about a request
- /reqs/{ReqID} (post) - update a request
- /reqs/{ReqID}/close (post) - close a request
- /reqs/{ReqID}/acquire_worker (post) - get info needed to access worker reserved by request
- /reqs/{ReqID}/prolong (post) - set the job's deadline
- /workers (get) - list all workers
- /workers/list (post) - filter and list workers
- /workers/{WorkerUUID} (get) - get info about a worker
- /workers/{WorkerUUID}/setstate (post) - set a worker's state
- /workers/{WorkerUUID}/setgroups (post) - set a worker's groups
- /workers/{WorkerUUID}/deregister (post) - deregister worker
A request has the following attributes:
- ID
- Priority
- Owner
- Deadline
- ValidAfter = time before which request will not execute
- State (one of WAIT, INPROGRESS, CANCEL, TIMEOUT, INVALID, DONE, FAILED)
- Job
- Caps
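As an illustration, creating and then using a request might look like this with curl (the server address and the JSON field contents are assumptions; the endpoints and attribute names come from the lists above):

    # create a new request
    curl -X POST http://boruta.example.org/reqs \
        -d '{"Priority": 4,
             "Deadline": "2018-06-01T12:00:00Z",
             "ValidAfter": "2018-06-01T08:00:00Z",
             "Caps": {"device_type": "odroid-u3"}}'

    # once the request is INPROGRESS, get access info for the reserved worker
    curl -X POST http://boruta.example.org/reqs/17/acquire_worker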
Weles
Weles is a lightweight testing framework, built on top of Boruta.
It provides a LAVA-like interface.
It has a YAML job definition that specifies actions to be executed on the DUT:
- Deploy
- Boot
- Test
- Collect logs and data
Weles does the following:
- parse the YAML job definition
- collect assets
- request DUT
- perform tests
Here is the Weles web API:
- /jobs (post) - add a new job
- /jobs/{JobID}/cancel (post) - cancel a job
- /jobs/list (post) - List and filter jobs
- /artifacts/list (post) - List and filter artifacts
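A job submission is then just a YAML file posted to the API. For example (the server address, job file name, and job ID are assumptions; the endpoints come from the list above):

    # submit a job definition, then list and filter jobs
    curl -X POST http://weles.example.org/jobs --data-binary @my-job.yml
    curl -X POST http://weles.example.org/jobs/list
    # cancel a job by its ID (ID value hypothetical)
    curl -X POST http://weles.example.org/jobs/42/cancel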
Weles uses something called the ArtifactDB filesystem to transfer job (run?) artifacts. Artifacts can be one of:
- IMAGE - image file
- RESULT - all outputs, files built during tests, etc.
- TEST - additional files uploaded by user for conducting the test
- YAML - yaml file describing the Weles Job
Perun
Perun does OS image testing
- schedules verification (based on a new set of OS images)
Actions:
- crawl a URL
- report changes
- submit Weles jobs
- collect artifacts
- interpret results
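Those actions suggest a simple polling loop. The sketch below is purely illustrative; every name in it is hypothetical:

    # hypothetical outline of the Perun cycle
    while true ; do
        # crawl the image server URL and report any changed snapshots
        new_images=$(crawl_image_server http://download.example.org/snapshots)
        for img in $new_images ; do
            submit_weles_job "$img"     # deploy/boot/test the new image
            collect_artifacts "$img"    # fetch logs and results
            interpret_results "$img"
        done
        sleep 3600
    done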
Tim Bird (Fuego stack)
This section may get long, but here goes:
First, Fuego has APIs in several areas, including APIs between:
- 1) tests and the core - these are shell functions, python libraries (for testlog parsing), json files (for test results and pass criteria), and yaml files (for test package definitions)
- 2) the core and Jenkins - Jenkins provides a REST API, which Fuego accesses using the python 'jenkins' module
- 3) the core and the Fuego server - the Fuego server provides a REST API, which Fuego accesses using the python 'requests' module
The core handles test execution, including the different phases of a test: source unpack, build, deploy, execute, collect results, parse results, analyze results. The core is written in shell script, with some parts in python.
The core API is documented in the first few links at: http://fuegotest.org/wiki/Fuego_Documentation#Developer_Guide
Jenkins handles job execution, which includes: detection of trigger, scheduling jobs, provisioning of DUT (with Fuego helper scripts), display of results, user interface for launching jobs manually, notification of results. Many of these are set up and configured by the end user (that is, they are left as user tasks).
The Fuego server (which is only in prototype form at the moment) handles inter-lab operations, such as: requesting test execution on a remote machine, presenting a list of ad-hoc tests, storing request and test-result artifacts for sharing between labs, and storing pass criteria for sharing between labs. The model is a 'pull' model, in that each lab pulls a job in order to execute it.
board layer
- defines how to control board hardware
- verb is hardware_reboot
- also: board_setup, board_teardown
- defines how to access board functions (e.g. the system logger)
- defines SDK for board
The API consists of shell script functions, with the ability to override the functions in the board file (at least, that's the intent). From functions.sh:
- target_reboot - software reboot of DUT
- override layer function = ov_target_reboot
- ov_board_control_reboot - hardware reboot of DUT
- override functions for rootfs operations are named ov_rootfs_*
- cmd - execute a command on the DUT
- report - execute a command on the DUT, and collect its output into the test log
transport layer
Defined in board file:
- defines how to communicate with board (e.g. ssh, serial)
- verbs are put, get, cmd
- also: transport_connect, transport_disconnect
The API consists of shell script functions, with the ability to override the functions in the board file (at least, that's the intent). From functions.sh:
- get - get files from the DUT
- put - put files to the DUT
- cmd - execute a command on the DUT
- report - execute a command on the DUT, and collect its output into the test log
- ov_transport_connect - prepare board for network connection
- ov_transport_disconnect - prepare board for network disconnection
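In a test, these verbs are called directly from the test's shell functions. A small illustrative fragment (the file names and variables are hypothetical; the verbs come from the list above):

    # copy a test to the DUT, run it, and fetch the results
    put my_benchmark.sh $BOARD_TESTDIR/             # send the script over
    cmd "chmod a+x $BOARD_TESTDIR/my_benchmark.sh"  # execute a command on DUT
    report "$BOARD_TESTDIR/my_benchmark.sh"         # run it, logging output
    get $BOARD_TESTDIR/results.log .                # retrieve the results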
board object
A board file is defined in shell syntax. It defines a set of variables that indicate what transport, board control, and toolchain to use for building software for the board and for operating the board during testing.
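A board file might look like the following sketch (the variable names follow Fuego's conventions as I understand them, but the specific names and values here should be treated as illustrative):

    # myboard.board - sketch of a Fuego board file (details illustrative)
    IPADDR="10.0.0.17"               # how to reach the board
    LOGIN="root"                     # account used on the board
    TRANSPORT="ssh"                  # which transport layer to use
    BOARD_CONTROL="ttc"              # which board-control layer to use
    TOOLCHAIN="arm-linux-gnueabihf"  # SDK for building test software
    ARCHITECTURE="arm"
    BOARD_TESTDIR="/home/fuego"      # where tests are staged on the DUT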
test layer
- Tests are defined with:
- a shell script to execute the test phases (with a shell function for each phase)
- a parser program, written in python
- pass criteria file (in json)
- test package definition (in yaml)
- chart configuration file (for controlling results display)
- test source code (tarball or git reference)
- Tests are built from source
- the 'ftc' tool is used to execute a test, which consists of several phases:
- pre_check - check that DUT has required attributes and is ready (matching)
- build - build the test software
- deploy - deploy the test software
- run - execute the test
- post_process - retrieve results
- log_compare - analyze results
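A skeletal test script might look like the following (the test_<phase> function naming and the put/report calls match the layers described above; the test itself and its details are hypothetical):

    # fuego_test.sh - skeletal Fuego test script (illustrative)
    tarball=my_benchmark-1.0.tar.gz   # test source, unpacked by the core

    function test_build {
        make CC="$CC"                 # build with the board's toolchain
    }

    function test_deploy {
        put my_benchmark $BOARD_TESTDIR/   # copy the binary to the DUT
    }

    function test_run {
        report "$BOARD_TESTDIR/my_benchmark -n 100"   # run and log output
    }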
user interface
Jenkins is used for job/board scheduling, and for results presentation.
It is also used for job triggering, provisioning, and notification, although all these are left as an exercise for the user.
Jenkins interfaces with the core via environment variables and command line executions. The core interfaces to Jenkins via REST APIs, and through a Jenkins plugin to put charts (graphs and tables) into the HTML output for a job.
The automated portion of the core interface to Jenkins is:
- abort job
The manual portion of the core interface to Jenkins is:
- list nodes
- list jobs
- add node
- add job
- add view
- build job
- remove job
- remove node
Jenkins retrieves chart data for a test (executed by a job) by calling mod.js (which is referenced by a custom flot plugin).
Fuego server
The Fuego server acts as a kind of dumb store for Fuego artifacts, although it supports some filtering.
These server operations are implemented as plugins for the tbwiki wiki engine. Specifically:
- MacroFuegoRequestList - shows a list of requests (json files in the 'requests' directory)
- MacroFuegoRunList - shows a list of runs (json files in the 'runs' directory)
- MacroFuegoTestList - shows a list of tests (ftp files in the 'tests' directory)
- ProcessorFuegoShow - shows an individual run, request, or test by dumping its json or yaml data
- ProcessorFuego - manages Fuego server requests:
- put_test - put an ftp file in the 'tests' directory
- put_run - put a run json file in the 'runs' directory
- put_request - put a request json file in the 'requests' directory
- query_requests - retrieve a list of requests matching a query
- get_request - get a request
what's left to the user?
- detecting that the software under test has changed
- initiating the test (trigger)
- building the software under test
- deploying the software under test to the DUT (provisioning)
- configuring notification (in Jenkins) of results
Brainstorming
Stack elements:
- Board control
- Device under test
- connections
- report generator
- job dispatcher
- device database (has information about devices)
- Test
- Result
- change detector
- artifact storage
- Presentation service
- logs
- run artifacts (logs, traces, results)
- test artifacts (source, binary, meta-data)
objects:
- test
- test results
- text
- history
- tables
- charts
- results database
- board (node)
- host
- job
- run (build)
- request (run request)
- connection/channel
- facility
- user
- interface (results presentation interface)
- scheduler
- administrator
QA operations
- detect something has changed
- build software
- install it
- test it
- determine if a result matters
- (previously) establish a baseline behavior that is acceptable
- match current behavior with baseline behavior
- save results
- examine results
- report problems
- find problems
- fix problems
uncategorized list of actions
- reboot the DUT
- turn power on to DUT
- turn power off to DUT
- read serial console of DUT
- build test software
- generate test data
- generate test variations
- build software under test
- build kernel under test
- build rootfs under test
- install software under test (provision)
- install kernel (update tftp, switch sdcard, install to /boot, etc.)
- install rootfs (update nfs area, write to sdcard, write to partitions, etc.)
- switch sdcard to host
- switch sdcard to DUT
- deploy test
- install test software
- collect log from test / send log from test
- parse log from test
- present test results to user (web interface)
- present table of past results
- present chart of past results
- detect software under test has changed
- schedule test for DUT
- reserve DUT
- match job request with DUT
- remove test from DUT
- determine pass/fail from test log
- determine if metric is over/under threshold
- determine if required tests passed (allow individual test cases to be ignored)
- detect if DUT is working
- store test artifacts
- store test results
- notify user of test failure
- notify user of board failure
- bisect software under test
- press/hold button on DUT
- customize the test's expected value
- customize the pass criteria for a test
- customize a benchmark's threshold
- customize the 'ignore result' list for a test
- customize the pass count for a test
- customize the parameters to a test
- add a testcase to the skip list for a test
- create a variation of a test
- specify different parameters to a test (in Fuego, the spec, or the board, or dynamic test vars)
- edit a test variation
Here are a bunch of miscellaneous ones:
- update expected results for a test
- monitor power while test is running
- validate data from test
- disconnect/reconnect power during test
- disconnect/reconnect bus (USB, CAN, I2C, etc.) during test
- disconnect/reconnect network during test
- change battery load during test
- monitor temperature during test
- capture video during test
- capture audio during test
- capture trace during test
- control user interface during test
- install host software needed for test
- set up network server for test
tool operations
Operations supported by different tools:
ttc
- console - access the target serial console
- cp - Copy files to or from the target.
- fsbuild - Build root filesystem for use on target.
- fsinstall - Install root filesystem for use on target.
- get_config - Install kernel config for target in the $KBUILD_OUTPUT directory
- get_kernel - Install kernel sources for target in the $KERNEL_SRC directory
- info - Show information about a target.
- kbuild - Build kernel from source.
- kinstall - Install kernel for use on target.
- list - Show a list of available targets.
- login - access a network login on the target.
- off - Turn off target board.
- on - Turn on target board.
- pos - Show power status of target board.
- reboot - Reboot target board.
- release - Release a reservation of a target.
- reserve - Reserve a target for use.
- reset - Reset target board.
- rm - Remove files from the target.
- run - Run a command on the target.
- set_config - Set one or more individual config options
- setenv - Prepare environment for building for target.
- status - Show status of target, including reservations. (not implemented yet)
- wait_for - Wait for a condition to be true.
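A typical session combining several of these might look like the following (assuming ttc's '<target> <command>' invocation style; the target name is hypothetical):

    # reserve a board, build and install a kernel, and check it
    ttc list                       # show available targets
    ttc beaglebone reserve         # reserve the target for use
    ttc beaglebone get_kernel      # populate $KERNEL_SRC
    ttc beaglebone kbuild          # build the kernel from source
    ttc beaglebone kinstall        # install the kernel for the target
    ttc beaglebone reboot          # reboot the target board
    ttc beaglebone run "uname -r"  # confirm the new kernel is running
    ttc beaglebone release         # release the reservation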
labgrid
labgrid-client
- monitor - Monitor events from the coordinator
- resources - List available resources
- places - List available places
- show - Show a place and related resources
- create - Add a new place
- delete - Delete an existing place
- acquire - Acquire a place
- release - Release a place
- env - Generate a labgrid environment for a place
- power - Change or get a place's power status
- power get
- power on
- power off
- power status
- console - Connect to the console
- fastboot - Run fastboot
- bootstrap - Start a bootloader
- io - Interact with Onewire devices
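For comparison, a similar labgrid-client session might look like this (the place name is hypothetical; '-p' selects the place to operate on):

    # acquire a place, power it up, and attach to its console
    labgrid-client places               # list available places
    labgrid-client -p myboard acquire   # take exclusive access
    labgrid-client -p myboard power on
    labgrid-client -p myboard console   # connect to the serial console
    labgrid-client -p myboard power off
    labgrid-client -p myboard release   # give the place back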