Iperf test definition comparison
This page compares the test definitions from Fuego and Linaro for the iperf test.
Differences
- Fuego only runs ...
- Linaro runs ...
High Level Assumptions
- Fuego does not disturb the system
  - if something is installed, it is removed by default
  - if something is started, it is stopped
- Fuego assumes you can run another test upon completion of one test
- Linaro assumes a clean install that will be replaced on the next test
  - things can be modified (packages installed, and forgotten about)
- Fuego treats the system like a final product that is immutable (see the cleanup sketch after this list)
- Linaro treats the system like a development system that is mutable
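As a rough sketch of the Fuego side of this assumption, a Fuego test usually pairs whatever test_run starts on the board with a test_cleanup that stops it again. The helpers used here (cmd, report, kill_procs) are Fuego core functions, but the server/client names are placeholders, not taken from an actual test:

```sh
# Minimal sketch of Fuego's "leave the board undisturbed" convention.
# my_server / my_client are placeholder names for illustration only.
function test_run {
    # start a helper daemon on the board only for the duration of the test
    cmd "$BOARD_TESTDIR/fuego.$TESTDIR/my_server --daemon"
    # run the test and capture its output into the test log
    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./my_client" \
        $BOARD_TESTDIR/fuego.$TESTDIR/${TESTDIR}.log
}

function test_cleanup {
    # stop anything the test started, so the next test sees an unmodified board
    kill_procs my_server
}
```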
Preparation
Building
- Fuego cross-builds the test software
- Linaro does not build the software
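The difference shows up in the build phase itself: a Fuego test_build cross-compiles on the host using the toolchain variables Fuego exports, while the Linaro definition assumes a binary will be available on the board. A minimal sketch of the Fuego pattern, reduced from the full iperf version at the end of this page:

```sh
# Fuego cross-build sketch: $HOST is the cross toolchain triplet that Fuego
# derives from the board's toolchain definition; the source comes from the
# tarball named at the top of fuego_test.sh.
tarball=iperf-2.0.5.tar.gz

function test_build {
    ./configure --host=$HOST --build=`./config.guess`
    make
}
```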
Pre-requisites
- Fuego checks for cross-compiler variables
- Linaro checks for root account
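In code, the two checks look roughly like this; assert_define is a Fuego core function and check_root/error_msg come from Linaro's sh-test-lib, while the variable name and message below are placeholders:

```sh
# Fuego style: fail on the host before the test starts if a required
# variable is not defined (SOME_REQUIRED_VAR is a placeholder).
function test_pre_check {
    assert_define SOME_REQUIRED_VAR
}

# Linaro style: the board-side script checks its own pre-requisites inline.
! check_root && error_msg "This test must be run as root"
```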
Alterations
- Linaro can install packages required by the test program on the board
- Fuego deploys the test software to the board
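A sketch of the two styles of getting the test program onto the board; install_deps is a Linaro sh-test-lib helper and put is a Fuego core function, while the package name here is only illustrative:

```sh
# Linaro style (runs on the board): alter the board by installing distro packages.
install_deps "iperf"

# Fuego style (runs on the host): copy the cross-built binary to the board's
# test directory; nothing is installed through the package manager.
function test_deploy {
    put src/iperf $BOARD_TESTDIR/fuego.$TESTDIR/
}
```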
Execution
- Linaro runs test for each crypto algorithm separately
- Fuego runs test for all crypto algorithms together
- Factorization of the test is different (sketched below)
  - dependency check, alterations, test execution, and parsing are done on the board for Linaro
  - dependency check, test execution, and parsing are done on the host for Fuego
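A rough sketch of the factorization difference (neither snippet is taken verbatim from the tests): in the Linaro definition the board-side script runs the program and records results itself, while in Fuego the host-side test_run only ships the command to the board and pulls the output back for later parsing on the host.

```sh
# Linaro style: everything happens on the board in one shell script.
# SERVER and RESULT_FILE stand in for the script's own variables.
output=$(iperf -c "$SERVER" -t 15)
echo "$output" | grep "bits/sec" >> "$RESULT_FILE"

# Fuego style: the host sends the command to the board and keeps the log
# locally; parsing happens later on the host. $srv is set up earlier in
# test_run (see the full fuego_test.sh at the end of this page).
function test_run {
    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./iperf -c $srv -t 15" \
        $BOARD_TESTDIR/fuego.$TESTDIR/${TESTDIR}.log
}
```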
Parsing
- Linaro parses the output for each crypto test on the target using awk
- Fuego parses the combined output on the host using python (parser.py)
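For example, the Linaro board-side parsing is typically a single awk pipeline appended to the wrapper script, in contrast to Fuego's host-side parser.py. The field positions and result-file format in this sketch are illustrative assumptions, not copied from the Linaro test:

```sh
# Linaro-style parsing, done on the board immediately after the run.
# LOGFILE and RESULT_FILE are placeholders for the script's own variables.
LOGFILE=./output.log
RESULT_FILE=./result.txt

# pick the bandwidth figure out of the iperf summary line and append a
# "test-case measurement units" record to the result file
awk '/bits\/sec/ {print "iperf-bandwidth " $(NF-1) " " $NF}' "$LOGFILE" >> "$RESULT_FILE"
```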
Results
- output is different
Presentation
- Linaro doesn't include presentation control for the test results in the test definition
Metadata
- Fuego specifies author, license, and gitrepo for the test program
- Linaro specifies the devices for the test to run on
- Linaro specifies distros where test can run
Questions
- Linaro install_deps: does this also install the package itself (with the test program binary)?
- Linaro: what does send-to-lava.sh do?
Field comparisons
Fuego item | Fuego use | Linaro item | Linaro use | Notes
---|---|---|---|---
fuego_test.sh:test_pre_check | check required test pre-requisites (none in this test) | sysbench.sh:! check_root && error_msg | check required test pre-requisites | Linaro code is inline in test script |
fuego_test.sh:test_build | cross-build the test program from tar | sysbench.sh:install_sysbench | download and build the test program | - |
- | - | sysbench.sh:install_sysbench | install required packages for build | Linaro has different build and dependency info per distro, Fuego has no notion of installing auxiliary packages on the board |
fuego_test.sh:test_deploy | Put test program on the board | sysbench.sh:install_sysbench | install test program on board (locally) | |
fuego_test.sh:test_run | instructions to execute the test program on the board | sysbench.yaml:run:steps: | instructions to execute the test program on the board | - |
parser.py | code to parse the test program log | sysbench.sh:general_parser and awk lines | code to parse the test program log | Linaro parsing is done on board |
spec.json | indicates values for test variables (none for this test) | sysbench.sh:NUM_THREADS= | indicates values for test variables | Linaro options are read on command line of test script |
test.yaml:fuego_package | indicates type/format of test | sysbench.yaml:metadata:format | indicates type/format of test | - |
test.yaml:name | name of test | sysbench.yaml:metadata:name | name of test | similar |
test.yaml:description | description of test | sysbench.yaml:metadata:description | description of test | similar |
test.yaml:license/author/version | test program information | - | - | Informational data |
test.yaml:maintainer | Maintainer of this Fuego test | sysbench.yaml:metadata:maintainer | Maintainer of this Linaro test | similar |
test.yaml:fuego_release | Fuego revision of this test | - | - | - |
test.yaml:type | type of test | sysbench.yaml:metadata:scope | type of test? | - |
- | - | sysbench.yaml:metadata:os | OSes that this test can run on | Linaro only? |
- | - | sysbench.yaml:metadata:devices | devices that this test can run on | Linaro only? (Fuego board selection is done by user when creating jobs for boards?) |
test.yaml:tags | tags for this test | - | - | Fuego only? |
test.yaml:params | test variable names, values, options (note: none in this test) | sysbench.yaml:params | test variable names and values | similar |
test.yaml:gitrepo | upstream git repository for test program | - | - | Fuego only? |
test.yaml:data_files | manifest used for packaging the test | - | - | Fuego only? |
Fuego source
fuego_test.sh
```sh
tarball=iperf-2.0.5.tar.gz

function test_build {
    # get updated config.sub and config.guess files, so configure
    # doesn't reject new toolchains
    cp /usr/share/misc/config.{sub,guess} .
    ./configure --host=$HOST --build=`./config.guess`
    sed -i -e "s|#define bool int|//#define bool int|g" config.h
    make config.h
    sed -i -e "s/#define HAVE_MALLOC 0/#define HAVE_MALLOC 1/g" -e "s/#define malloc rpl_malloc/\/\* #undef malloc \*\//g" config.h
    sed -i -e '/HEADERS\(\)/ a\#include "gnu_getopt.h"' src/Settings.cpp
    make
}

function test_deploy {
    put src/iperf $BOARD_TESTDIR/fuego.$TESTDIR/
}

function test_run {
    cmd "killall -SIGKILL iperf 2>/dev/null; exit 0"

    # Start iperf server on Jenkins host
    iperf_exec=`which iperf`
    if [ -z $iperf_exec ]; then
        echo "ERROR: Cannot find iperf"
        false
    else
        $iperf_exec -s &
    fi

    assert_define BENCHMARK_IPERF_SRV
    if [ "$BENCHMARK_IPERF_SRV" = "default" ]; then
        srv=$SRV_IP
    else
        srv=$BENCHMARK_IPERF_SRV
    fi

    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./iperf -c $srv -t 15; ./iperf -c $srv -d -t 15" $BOARD_TESTDIR/fuego.$TESTDIR/${TESTDIR}.log
}

function test_cleanup {
    kill_procs iperf
}
```