Rafael Gago Castano said this:
We evaluated Fuego some time ago (in spring-summer 2017) because we had a LAVA setup that had become hard to work with. In the end we didn't switch, because in Fuego the Git repository was mounted on the Jenkins instance's filesystem, which prevented multiple users from developing on the same server.
I still found Fuego superior to LAVA in many respects for our use case, so, taking time here and there, I developed a similar Jenkins-based framework with a very similar philosophy: a very thin wrapper that delegates the hard work to Jenkins, but adds some features from LAVA, such as dynamic board scheduling.
Luckily my company has allowed me to open-source it. Since this framework is inspired by Fuego (some code is a direct copy), I thought it would be a good idea to share it with you here. If you think there are cool ideas/concepts, you have a reference implementation; and any criticism from experts in the field is very welcome.
I will try to list the most important features/differences:
- Like Fuego, it has small tooling coded in Python, tests are written in shell, and it uses Jenkins too.
- Tests are generated from shell script pieces (with JSON metadata for Jenkins consumption). A piece can include (copy-paste) other pieces to build a script, accumulating all the Jenkins metadata along the way (e.g. each of these pieces can define parameters for Jenkins).
- There is an API containing a few board functions that the user has to provide (dut_cmd, dut_get, dut_put), so board operations are abstracted and tests are shareable. All the transports in Fuego are (were) part of the core; in Hottest they are a "module", so the user can implement them or select one of the available transports.
- There are predefined milestones in the test sequence. The test writer can dynamically add functions before or after each milestone; e.g. you can call "add_step_before_power_on" and pass a function that cross-compiles a small executable, then call "add_step_before_test" and pass a function that copies the executable to the device, and then use the executable inside a test.
- It allows powering boards on and off, and flashing them.
- The generator supports include directories, so the board, powering, lab setup and firmware flashing implementations can be kept in separate user-private repositories while still using shared tests.
- All jobs (tests) are self-contained: they embed the board code into the job. This avoids having to mount Git repositories on the server's filesystem, at the expense of code duplication on Jenkins (handled by the tool; disk space is cheap). It also allows tests from older versions to coexist unmodified on the same server.
- It adds a mapping between boards and Jenkins nodes, so you can have tests written for a specific board type and several such boards connected to the server. When you schedule a lot of tests for the same board type, Jenkins spreads the workload across all idle boards. This uses Jenkins labels under the hood.
- Testplans are implemented as Groovy pipelines that schedule a series of tests in parallel. These pipelines can also run some tests serially in a well-defined order, which is useful when, e.g., there is only one hardware resource/dongle on the test PC that many tests use.
- For now it has no extra Jenkins dependencies, so it works with a bare modern Jenkins install with the default plugins.
- Reporting is in a minimum-viable state: it just generates a JUnit report and builds a trivial gnuplot graph as a Jenkins artifact.
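The milestone/step-registration mechanism in the list above can be sketched as follows. This is a minimal illustration, not the actual Hottest implementation; only the function names add_step_before_power_on and add_step_before_test come from the discussion, while the runner skeleton and the helper names are invented for the example:

```shell
# Toy sketch of milestone-based step registration (assumption: the real
# Hottest core is more elaborate). Chunks register functions at named
# milestones; the core runs each milestone's list in order.

STEPS_BEFORE_POWER_ON=""
STEPS_BEFORE_TEST=""

add_step_before_power_on () { STEPS_BEFORE_POWER_ON="$STEPS_BEFORE_POWER_ON $1"; }
add_step_before_test ()     { STEPS_BEFORE_TEST="$STEPS_BEFORE_TEST $1"; }

run_steps () {
    # Execute every function name accumulated for a milestone.
    for step in $1; do "$step"; done
}

# A test chunk registers its hooks (hypothetical helpers):
build_helper ()  { echo "cross-compiling helper"; }
deploy_helper () { echo "copying helper to the device"; }

add_step_before_power_on build_helper
add_step_before_test deploy_helper

# The core runner then executes the milestones in sequence:
run_steps "$STEPS_BEFORE_POWER_ON"
echo "powering on the board"
run_steps "$STEPS_BEFORE_TEST"
echo "running the test"
```

The point of the pattern is that an included chunk only ever calls the add_step_* functions; the overall ordering stays in the core.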
> By "the GIT repository" do you mean the Fuego core (fuego-core) repository or
> are you talking about the fuego-ro folder where the board files are?

It was this thread: https://lists.linuxfoundation.org/pipermail/fuego/2017-June/000717.html

I don't have the whole picture of that evaluation in my mind now, as it was long ago, but our conclusion was that there was no way for multiple people to safely develop on the server at the same time without external coordination. If I remember correctly:
- The board code was sourced, so modifications there had to be coordinated.
- Running on the same board required external coordination to avoid collisions, as there was no scheduler/board allocation.

On Hottest the tests are stored on the server, but there are three features that allow multiple people to safely develop tests on the server:
- Test jobs can be uploaded to their own temporary folders (namespaces) to do some trial work and then deleted (this should be easy for Fuego too).
- Boards are allocated dynamically, so you know for sure that when you run a test on a board you don't collide with another user.
- The board code is not sourced, but copied verbatim into each test.

> Thanks a lot for releasing it as open source. It looks quite clean and nicely written.
> My only concern is further fragmentation in the Linux testing arena. It would be nice to share some parts to avoid redundant work.
> https://elinux.org/Automated_Testing_Summit

You're welcome. I agree; I wouldn't have gone this way if I could have avoided it. Maybe I was wrong, but I thought that we needed some major changes, and that you already had systems deployed and working, so pushing all those changes would have required a lot of coordination, with a risk of not being accepted. If Fuego at some point can do what we need, I'd of course prefer to join efforts.

> Q: can you use it without jenkins, from the command line alone?

I guess not, at least not for now. Jenkins is used for dynamic board allocation.
If I added a trivial modification to the generation tool to generate a script file containing all the parameter values and Jenkins variables, it could work, provided that:
- They source that file.
- They have access to all the Jenkins variables that make it possible to locate the required small tools.

But we have no use case requiring this, and we'd lose the Jenkins scheduler.

> I guess that this is similar to the prolog.sh in Fuego, isn't it?

It's a more powerful mechanism IMO. You write the board and tests starting from zero. Then you can combine individual chunks (pieces of code) to, e.g.:
- Power your relays "on" and "off".
- Fetch files through wget.
- Flash, e.g. via tftp (not provided as of now, as we use an internal tool), and boot.
- Use the ssh transport.

Then all that logic gets into your test. Your test gets all the related Jenkins parameters too, e.g. which relay line you are using, the URL of the files to fetch when flashing, the tftp addresses, the ssh parameters, etc. For example, take a look at chunks/util/communication/booted-ssh-board[.sh|.json]: any test including this chunk gets the related parameters. The default values of those Jenkins parameters can be configured at the generation stage, so it isn't required to enter them every time.

The only thing all these inclusions do is add functions at specific test steps/milestones, e.g. fetching calls "add_step_before_power_on", flashing calls "add_step_after_power_on", etc.

> In Fuego you can choose an existing transport but you are also able to override the functions as well thanks to the overlays (ov) system.
> ov_transport_cmd, ov_transport_get, ov_transport_put

Yeah, but aren't you choosing something that you aren't going to use just to make Fuego happy, and then using shell function shadowing?

> Nice. Fuego also allows powering on and off the board (still a bit experimental, we want to use the pdudaemon) but not flashing it.
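To make the transport abstraction concrete: a chunk like booted-ssh-board presumably supplies the dut_cmd/dut_get/dut_put board API over ssh, roughly along these lines. This is a hedged sketch, not the repository code; the variable names DUT_SSH_HOST and DUT_SSH_USER are invented here, standing in for whatever Jenkins job parameters the chunk's .json file actually declares:

```shell
# Hedged sketch of an ssh-based implementation of the dut_* board API
# mentioned in the discussion. The real chunk lives at
# chunks/util/communication/booted-ssh-board.sh; the parameter names
# below are illustrative only.

dut_cmd () {
    # Run a command on the device under test.
    ssh "${DUT_SSH_USER}@${DUT_SSH_HOST}" "$@"
}

dut_put () {
    # Copy a local file to the device: dut_put <local> <remote>
    scp "$1" "${DUT_SSH_USER}@${DUT_SSH_HOST}:$2"
}

dut_get () {
    # Copy a file from the device: dut_get <remote> <local>
    scp "${DUT_SSH_USER}@${DUT_SSH_HOST}:$1" "$2"
}
```

Any test written against these three functions works unchanged when the user swaps in a serial or internal transport, which is what makes the tests shareable.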
> Apart from flashing, can it also provision (deploy the kernel and rootfs) the board using tftp and a network filesystem like LAVA?

Anything that can be done with shell can be done; as the design is completely modular, it's just a matter of adding the right steps in the right place. Useful modules can always be made part of the core if they are generic enough.

> Is this part what enables having "multiple users developing on the same server"?

No, it was having dynamic board allocation + namespacing (folders) + the board code duplicated into each test. Having the include directories allows you to keep your own server definitions, boards and modules (e.g. for in-house tools that no one else is interested in) in private repositories.

> Interesting approach. What happens when you want to update the job? Do you need to remove the previous results?

No, it just updates; the results stay there. You also get an automatic backup from the tool each time you upload something.

> Nice, I think that Jan-Simon used Jenkins labels for dynamic board scheduling in Fuego as well. But I have never tried it because I don't have the need.

Yeah. This is how it's done in Hottest. This is a must for us, both for allowing multiple users without board synchronization and for being able to speed up the nightly testing by throwing more hardware at it.

> Sounds good.
> We are planning to use a different approach. Basically it would be a fuego test calling fuego tests in order. This is possible because we can call tests from the command line.

If a sizeable number of tests are scheduled at the same time, how do you keep tests from colliding when allocating boards to run on?

> Default plugins: do you mean the "recommended plugins" button that appears the first time you run jenkins?

Yes.
More discussion with Daniel:
Regarding pulling in test pieces from a library, Daniel said:
Hmm, OK, I guess the difference then is that test.sh works like the conductor of an orchestra, somewhat like Fuego's main.sh. In other words, in Fuego we call main.sh, which executes the test's phases (build, deploy, run, parse...). In your case, you would call test.sh directly and test.sh would decide what to do by using functions from the library. Is my understanding correct? Sounds like an object-oriented approach (Fuego) vs. a function-oriented approach (Hottest). If I understood that correctly, your approach is probably more flexible. In fact, it looks like LAVA tests, which define all of the stages. In Fuego, we just ask the test developers to "fill in" a set of functions/hooks. I guess there is a trade-off there.
More explanation from Rafael:
Regarding job parameters in Fuego:
Again, I have forgotten most details of Fuego (it has been two years), but I remember that some parameters came from different places (the board?) and others from the ftc tool (you depended on that tool for many things). On Hottest every parameter is a Jenkins job parameter, which IMO has the following advantages:
- Less things for the user to learn and for us to document.
- Parameters can have a help string.
- All parameters can be browsed visually on Jenkins.
- The standard Jenkins API can be used to trigger tests with different parameters.
On Hottest, once you have pushed your tests and testplans to the server, everything operates as standard Jenkins.
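Because every parameter is a plain Jenkins job parameter, any job can be triggered through the stock Jenkins remote API (the buildWithParameters endpoint), with nothing Hottest-specific involved. The job name, parameter names and credentials below are invented for the example:

```shell
# Triggering a parameterized build via the standard Jenkins remote API.
# buildWithParameters is stock Jenkins; everything else here is a
# made-up example (server, job, parameters, token).
JENKINS_URL="https://jenkins.example.com"
JOB="my-board-test"
ENDPOINT="${JENKINS_URL}/job/${JOB}/buildWithParameters"

# Uncomment to actually fire the build (requires a user API token):
# curl -X POST --user "user:api-token" "$ENDPOINT" \
#      --data-urlencode "FETCH_URL=http://example.com/fw.bin" \
#      --data-urlencode "RELAY_LINE=3"
echo "$ENDPOINT"
```

This is the advantage claimed above in practice: no framework-specific CLI is needed to parameterize or script test runs.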
On Hottest a monolithic (but dependency-free) test script is generated containing everything. Think of the C preprocessor, which just copy-pastes things at its "#include" directives; it's just that an "#include" for the prolog and epilog is added for you in every test. You can see the implementation here: https://github.com/HMSAB/hottest/tree/master/chunks/runtime
In footer.sh you can see the part that calls all the functions the user registered. header.sh contains the whole core implementation.
BTW, these parts are not special and use the same tooling/rules. You can see that "header.json" adds Jenkins parameters just as any other "chunk" (a .sh + .json pair) can.
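The C-preprocessor analogy above can be made concrete with a toy generator. This is an assumption-laden sketch, not the real tool (which also merges the .json metadata): chunk files are spliced in wherever an include directive appears, and a header and footer are wrapped around the result, like Hottest's header.sh/footer.sh:

```shell
# Toy model of copy-paste script generation. The "#include <name>"
# directive syntax and the chunk directory layout are invented for
# this example; only the header/footer wrapping idea comes from the
# discussion.
mkdir -p /tmp/chunks

cat > /tmp/chunks/hello.sh <<'EOF'
echo "hello from a chunk"
EOF

cat > /tmp/test.in <<'EOF'
#include hello
echo "test body"
EOF

generate () {  # generate <input> <output>
    {
        echo '# --- header.sh (core implementation) would go here ---'
        while IFS= read -r line; do
            case "$line" in
                '#include '*) cat "/tmp/chunks/${line#\#include }.sh" ;;
                *) printf '%s\n' "$line" ;;
            esac
        done < "$1"
        echo '# --- footer.sh (runs registered steps) would go here ---'
    } > "$2"
}

generate /tmp/test.in /tmp/test.sh
sh /tmp/test.sh   # prints the chunk line, then the test body
```

The generated /tmp/test.sh is monolithic and dependency-free: it can be copied into a Jenkins job verbatim, which is what lets old and new test versions coexist on the same server.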