Xilinx test survey response

Xilinx survey response provided by Punnaiah Choudary Kalluri

Survey Questions

  • What is the name of your test framework? regression_xlnx

regression_xlnx is a command-line tool. Xilinx's in-house CI framework uses this tool for building the software and executing the test cases. It is currently an in-house program that is available only within Xilinx (that is, it is not open source).

Note that the framework is not only for Linux tests; it also supports standalone (bare-metal), RTOS, and boot tests.

Which of the aspects below of the CI loop does your test framework perform?

Build/deploy/test/analyze/publish the results (see details below)

Does your test framework:

source code access

  • access source code repositories for the software under test? Yes
  • access source code repositories for the test software? Yes
  • include the source for the test software? Yes
  • provide interfaces for developers to perform code reviews? No. Not from the test framework
  • detect that the software under test has a new version?
    • if so, how?
  • detect that the test software has a new version? No

test definitions

Does your test system:

  • have a test definition repository?
    • if so, what data format or language is used (e.g. yaml, json, shell script)?

Does your test definition include:

  • source code (or source code location)? Yes
  • dependency information? Not now
  • execution instructions? Yes
  • command line variants? Yes
  • environment variants? Yes
  • setup instructions? Yes
  • cleanup instructions? Not now
    • if anything else, please describe:

Does your test system:

  • provide a set of existing tests? Yes
    • if so, how many? Around 600 tests

build management

Does your test system:

  • build the software under test (e.g. the kernel)? Yes
  • build the test software? Yes
  • build other software (such as the distro, libraries, firmware)? Yes
  • support cross-compilation? Yes
  • require a toolchain or build system for the SUT?
  • require a toolchain or build system for the test software? Yes
  • come with pre-built toolchains? Yes
  • store the build artifacts for generated software? Yes
    • in what format is the build metadata stored (e.g. json)? YAML/CSV/HTML
    • are the build artifacts stored as raw files or in a database? Raw files
      • if a database, what database?

Test scheduling/management

Does your test system:

  • check that dependencies are met before a test is run? No
  • schedule the test for the DUT? Yes
    • select an appropriate individual DUT based on SUT or test attributes? Yes
      • How?

Each test case has its own metadata represented in YAML format. Board (DUT) information is also part of this metadata, and some test cases can be run on multiple different DUTs, so the YAML can also list the supported DUTs.

The framework provides TQL (test query language), similar to SQL, for selecting test cases by matching the tags that are available in the YAML files.
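For illustration only, a test case metadata file along these lines might look roughly like the following. The field names and board names below are invented, not the actual regression_xlnx schema:

  # Hypothetical test case metadata (YAML); field names are illustrative only
  name: iperf_tcp_throughput
  type: linux                       # the framework also covers standalone/RTOS/boot tests
  tags: [network, performance]
  supported_duts:                   # boards this test can be scheduled on
    - zcu102
    - zcu106
  execution:
    command: ./run_iperf.sh
    timeout: 600

A TQL query would then match on such tags, for example to select every test tagged "network" that supports a particular DUT.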

    • reserve the DUT? Yes
    • release the DUT? Yes
  • install the software under test to the DUT? Yes
  • install required packages before a test is run? Yes
  • require particular bootloader on the DUT? (e.g. grub, uboot, etc.) Yes, but there is no requirement that a bootloader prompt is shown after power-up
  • deploy the test program to the DUT? Yes
  • prepare the test environment on the DUT? Yes
  • start a monitor (another process to collect data) on the DUT? No
  • start a monitor on external equipment? Yes
  • initiate the test on the DUT? Yes
  • clean up the test environment on the DUT? Yes

DUT control

Does your test system:

  • store board configuration data? Yes
    • in what format? Shell scripts (a brief illustrative sketch appears at the end of this section)
  • store external equipment configuration data? Yes
    • in what format? Shell scripts
  • power cycle the DUT? Yes
  • monitor the power usage during a run? Yes
  • gather a kernel trace during a run? Yes
  • claim other hardware resources or machines (other than the DUT) for use during a test? Yes
    • How?

Reservations are not done directly from the test framework but as part of the test case scripts. Each test case has its own requirements.

We have two options:

1. Our board farm control software provides a command line interface/scripts to control the machine that is connected to the DUT. Each machine has two Ethernet ports (one connected to the network and the other connected to the DUT), plus serial, JTAG, and USB ports.

2. Control the machine from the DUT using ssh/nc (net console) commands if the machine is directly connected to the DUT through Ethernet.
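As a rough sketch of option 2, assuming a generic Linux DUT and placeholder addresses (these commands are not part of regression_xlnx itself):

  #!/bin/sh
  # Hypothetical sketch: from the DUT, control the directly connected machine
  # over the point-to-point Ethernet link using standard ssh/scp commands.
  HOST=192.168.2.1                     # placeholder address of the connected machine
  ssh user@$HOST "dmesg | tail -n 50"  # run a command on that machine
  scp user@$HOST:/tmp/capture.log .    # pull back a file it collected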

  • reserve a board for interactive use (i.e. remove it from automated testing)? The test framework is just another user of the board farm management software, which runs commands automatically. Interactive use is supported by default
  • provide a web-based control interface for the lab? No
  • provide a CLI control interface for the lab? Yes
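As referenced above, board configuration data is kept as shell scripts. A minimal sketch of what such a fragment could contain (variable names and values are invented for illustration):

  # Hypothetical board configuration fragment; not the actual regression_xlnx format
  BOARD_NAME=zcu102
  SERIAL_PORT=/dev/ttyUSB0            # console connection to the DUT
  POWER_OUTLET=3                      # outlet on the remote power switch
  JTAG_CABLE_ID=210308A1B2C3          # cable used for programming/boot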

Run artifact handling

Does your test system:

  • store run artifacts? Yes
    • in what format? Txt files
  • put the run meta-data in a database? No
    • if so, which database? No database. It's just a test case folder.
  • parse the test logs for results? Yes
  • convert data from test logs into a unified format? Yes
    • if so, what is the format? YAML, CSV, HTML
  • evaluate pass criteria for a test (e.g. ignored results, counts or thresholds)? Yes
  • do you have a common set of result names? Yes
    • if so, what are they? PASS, FAIL, UNANALYZED
  • How is run data collected from the DUT? Results are not stored on the DUT; they end up in the results folder
  • How is run data collected from external equipment? Mostly using ssh
  • Is external equipment data parsed? Yes

User interface

Does your test system:

  • have a visualization system? No
  • show build artifacts to users? Yes
  • show run artifacts to users? Yes
  • do you have a common set of result colors? Yes
    • if so, what are they? pass-green, fail-red, unanalyzed-yellow
  • generate reports for test runs? Yes
  • notify users of test results by e-mail? Yes
  • can you query (aggregate and filter) the build meta-data? No
  • can you query (aggregate and filter) the run meta-data? No
  • what language or data format is used for online results presentation? HTML, JavaScript
  • what language or data format is used for reports? web interface (HTML)
  • does your test system have a CLI control tool?
    • what is it called?

It is not part of the test framework, but we have a custom CLI tool which calls the test framework for build and run.

Languages:

Examples: json, python, yaml, C, javascript, etc.

  • what is the base language of your test framework core? Shell scripts, Python, and YAML
  • What languages or data formats is the user required to learn? Shell scripting and Python


Can a user do the following with your test framework:

  • manually request that a test be executed (independent of a CI trigger)? Yes
  • see the results of recent tests? Yes
  • set the pass criteria for a test? Yes
    • set the threshold value for a benchmark test? Yes
      • How?

The threshold value is part of the test result validation specification. Each test case can have its own results validation script, which is called after execution of the test. For example, if it is an iperf test, the validation script checks the TCP rx/tx throughputs and compares them against the targeted values within that script.
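A minimal sketch of such a validation script, assuming an iperf3-style summary line and an invented threshold (this is not the actual Xilinx script):

  #!/bin/sh
  # Hypothetical validation sketch: compare the measured TCP throughput against a target.
  TARGET_MBPS=900                                            # invented per-test threshold
  # Assumes an iperf3-style summary line ending in "receiver", with the rate in field 7
  MEASURED=$(grep "receiver" iperf.log | awk '{print $7}')
  if [ "${MEASURED%.*}" -ge "$TARGET_MBPS" ]; then
      echo "PASS"
  else
      echo "FAIL"
  fi

The PASS/FAIL strings mirror the result names listed in the run artifact section above.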

    • set the list of testcase results to ignore? Yes
  • provide a rating for a test? (e.g. give it 4 stars out of 5) No
  • customize a test? Yes
    • alter the command line for the test program? Yes
    • alter the environment of the test program? Yes
    • specify to skip a testcase? No one has requested this feature yet, but it can be done
    • set a new expected value for a test? Yes
    • edit the test program source? Yes
  • customize the notification criteria?
    • customize the notification mechanism (eg. e-mail, text) No
  • generate a custom report for a set of runs? Yes
  • save the report parameters to generate the same report in the future? Yes

Requirements

Does your test framework:

  • require minimum software on the DUT? No
  • require minimum hardware on the DUT (e.g. memory) Yes
    • If so, what? (e.g. POSIX shell or some other interpreter, specific libraries, command line tools, etc.) JTAG, serial, and an SoC or FPGA are the minimal requirements.

  • require agent software on the DUT? No
    • If so, what agent? N/A
  • is there optional agent software or libraries for the DUT? No
  • require external hardware in your labs? Yes

APIS

Does your test framework:

  • use existing APIs or data formats to interact within itself, or with 3rd-party modules? Yes, we use data formats
  • have a published API for any of its sub-module interactions (any of the lines in the diagram)? No
    • Please provide a link or links to the APIs?

Sorry - this is kind of open-ended...

  • What is the nature of the APIs you currently use?

Are they:

    • RPCs?
    • Unix-style? (command line invocation, while grabbing sub-tool output)

    • compiled libraries?
    • interpreter modules or libraries?
    • web-based APIs?
    • something else?

Relationship to other software:

  • what major components does your test framework use (e.g. Jenkins, MongoDB, Squad, Lava, etc.) Jenkins, custom boardfarm management/control tool, custom report generation tool
  • does your test framework interoperate with other test frameworks or software? Yes
    • which ones? We are doing a proof of concept using the U-Boot Python test framework on top of the current boardfarm management/control tool

Additional Data