Buildroot:DeveloperDaysELCE2013

From eLinux.org

Buildroot Developers Meeting, 26-27 October 2013, Edinburgh, UK

Event sponsored by Imagination Technologies.

The discussions at the buildroot developers meeting led to the following conclusions. See below for the details.

  • Google Summer of Code: this first project was a mixed success, but we will do it again next year.
  • Community organization:
    • Maintainers should take patches in ready-to-commit branches and send them out to the list, and update patchwork to supersede the patches they included.
    • Peter should force decisions on major changes when he is asked to.
    • Yann will add a feature to patchwork that makes Acked-by tags visible in the web front-end and the pwclient interface.
    • We will continue the weeding of old patches with bi-weekly requests for updates.
    • We will try to get a "new contributor" to work on a checkpackage script.
  • We can probably remove the experimental tag from (e)glibc.
  • The BR2_EXTERNAL patches will be accepted (after some more modifications). The directory hierarchy in the external tree will be forced to follow the package/, configs/ and board/ structure (though using board/ is of course not forced).
  • We will start adding 'demo' configs (i.e. non-minimal configs) in the configs/ directory. They may move to another place later.
  • We will add test cases in the support/test directory.
  • Support for maintaining package patches using quilt or git will be added when someone contributes scripts for it.
  • Config.in.legacy will be kept as it is now, until someone contributes a script that automatically updates .config instead.
  • We won't do anything for SPDX support for the time being. Only when a package carries a license that is not in our current list will we use the SPDX short name for it.
  • Yann will adapt and re-send the instrumentation patches.
  • The parallel top-level make patches will be accepted, except the one that removes .NOTPARALLEL.

What is Buildroot ?

Buildroot is a set of Makefiles and patches that makes it easy to generate a complete embedded Linux system. Buildroot can generate any or all of a cross-compilation toolchain, a root filesystem, a kernel image and a bootloader image. Buildroot is useful mainly for people working with small or embedded systems, using various CPU architectures (x86, ARM, MIPS, PowerPC, etc.): it automates the building process of your embedded system and eases the cross-compilation process.
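The build flow described above can be sketched as a short shell session (a sketch only; the defconfig name is a placeholder for whatever board configuration applies):

```shell
# Minimal Buildroot workflow, wrapped in a function for illustration.
# The defconfig name below is just an example target.
buildroot_build() {
    defconfig="$1"           # e.g. qemu_arm_versatile_defconfig
    make "${defconfig}"      # seed .config from the stored defconfig
    make                     # build toolchain, rootfs, kernel, bootloader
}
# Resulting images end up in output/images/
```

Running `make menuconfig` between the two steps lets you adjust packages and toolchain options before the build.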

Location and date

The Buildroot community is organizing a meeting on Saturday 26th and Sunday 27th October 2013 in Edinburgh, for Buildroot developers and contributors. This meeting is a mixture of discussion and hacking sessions around the Buildroot project. It takes place right after the Embedded Linux Conference Europe, in order to make it easy for participants to attend both events.

The meeting took place at the Edinburgh Training and Conference Venue (http://www.edintrain.com/), 16 St. Mary's Street, Edinburgh. See http://www.edintrain.com/location-and-accesibility/ for details on the location. The meeting room is kindly sponsored by Imagination Technologies.

The meeting will take place from 9:30 AM to 6 PM on both days, and a dinner will be planned on Saturday evening. The dinner will be paid for by our sponsor Imagination Technologies, but the lunches will be paid for by the participants themselves.

Sponsor: Imagination Technologies


Imagination Technologies has kindly offered to sponsor the Buildroot Developers Meeting. They are offering to the Buildroot community the meeting room (with Internet connection, projector, and everything needed to let a team of open-source developers work efficiently) and the Saturday evening dinner.

Imagination Technologies is a global leader in multimedia, processor and communication technologies. The company creates and licenses market-leading IP solutions for graphics, video and vision, CPU/general purpose processing, multi-standard communications and connectivity, and cross-platform voice and video communications. Imagination's MIPS CPU cores and architectures range from solutions for ultra low-power 32-bit microcontrollers to high-performance 32/64-bit advanced applications and network processing. MIPS is supported by a broad ecosystem of tools and software including open source embedded Linux distributions like Buildroot.

Participants

  1. Thomas Petazzoni
  2. Yann E. MORIN
  3. Peter Korsgaard
  4. Markos Chandras
  5. Samuel Martin
  6. Arnout Vandecappelle
  7. Esben Haabendal (Saturday only)
  8. Thomas De Schampheleire (over Google Hangouts)
  9. Jesse Cobra (over Google Hangouts, Saturday afternoon)
  10. Ryan Barnett (over Google Hangouts, Saturday afternoon)
  11. Clayton Shotwell (over Google Hangouts, Saturday afternoon)
  12. Jérôme Pouiller (over Google Hangouts, Saturday afternoon)
  13. Luca Ceresoli (over Google Hangouts, Saturday afternoon)

Report of the discussion

This is the report of the discussion, which was noted down while the discussion took place. Conclusions are usually indicated in bold.

Google summer of code

The GSoC is organized in two phases. At the end of the first phase, the mentor has to make an evaluation and decide whether the student can continue or not. At the mid-term evaluation, Thomas was disappointed with the amount of work that Spenser was doing for the full-time payment he received, and especially with the lack of communication. Spenser promised that he would communicate more, but it turned out that he still did work for the university during the second phase. In the end, Thomas failed him in the final evaluation, which means he received only half of the budget - Spenser agreed with this.

Thomas in the end has mixed feelings about the result. Spenser did good work on complicated things and certainly provided useful patches, and it certainly didn't take too much time from Thomas to mentor him. It certainly also got us a bit of exposure.

Spenser still has six boards. For the time being he can keep them, but if at some point Spenser becomes less active in buildroot he should send them back. Do we do something for next year? We certainly have good results from this project without much cost to us. We could even take two students next year if Yann or someone else is willing to mentor one of them. One of the two could work on more infrastructural things like testing. We'll discuss concrete topics at the FOSDEM meeting. Some ideas: xbmc (as a proof of concept that you can build multimedia applications in buildroot), vlc.

Conclusions: it was a mixed success, and we'll do it again next year.

Community organization

There are more than 300 pending patches on patchwork, and the number tends to increase rather than go down.

There are very old patches that nobody really cares about. That's why ThomasP tried to push for handling the oldest patches. We can probably throw away most of these patches, because nobody is motivated to handle them. For instance, firefox is a complex patch set and you can question its usefulness in buildroot (proven by the fact that nobody picks it up). The bi-weekly list of the ten oldest patches is a good way to address this.

There is currently no good workflow for people other than Peter to take patches in a branch and send a pull request to Peter. If you send a mail saying that you'll pick a patch up, Peter uses that information (i.e. doesn't look at the patch again). You can also change the state in patchwork, but there is no good state to indicate that it has moved to someone else's branch. There is, however, the 'delegated to' field that you can use.

The Acked-by tags are currently not enough to get Peter to commit a patch, mainly because Peter may have missed the mail. A solution would be to update patchwork and pwclient to show which patches have received an Acked-by - Peter uses pwclient quite a lot for maintaining patches.

Yann has to contact the patchwork developers about getting visual feedback on the Acked-by/Reviewed-by/Tested-by status of a patch (in both the web GUI and pwclient), and about updating pwclient to be able to set the 'delegate' person.

In Thomas's opinion, for new packages we can be more aggressive about applying them: it won't break anybody's build because it's new, and the autobuilders will detect if anything's wrong with it. For version bumps, we can still apply them aggressively early in the release cycle because the autobuilders will pick them up. There is a risk that it will break things for someone, but if it gets a Tested-by tag it can be committed - like the qt5 version bump. We should also consider whether the contributor is visibly active, so we can trust them to fix things if something breaks.

One reason why the patch queue is getting longer and longer is that we keep on getting more and more patches.

Markos says that there is a way to put patchwork references in the commit message, and to set up a receive hook that updates patchwork when you make a commit. This way, somebody else can create a patch series that contains these tags and Peter can just pull the series, and patchwork will be updated automatically. Or he can use the current script and use the patchwork ID in the commit message instead of the hash (which is not stable, since it changes whenever Peter modifies the patch).
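Such a receive hook could be sketched as follows (a sketch only: the "Patchwork-Id:" tag name, the "Accepted" state, and the availability of pwclient on the server are all assumptions, not the actual setup Markos described):

```shell
#!/bin/sh
# Sketch of a git hook helper: scan the commit messages of newly pushed
# commits for a "Patchwork-Id: NNNN" line (hypothetical tag name) and
# mark those patchwork entries as Accepted via pwclient.
update_patchwork() {
    rev_range="$1"                         # e.g. "$oldrev..$newrev"
    git log --format=%B "${rev_range}" |
        sed -n 's/^Patchwork-Id: *//p' |
        while read -r pwid; do
            pwclient update -s Accepted "${pwid}"
        done
}
```

A real post-receive hook would read the old/new revisions from stdin and call something like this helper per updated ref.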

What if you have a local tree with some of your own patches and some patches from other people? We should distinguish two types of branches. Contribution branches contain your own work (and perhaps also patches from others) - these should be sent through the list in the normal way, not with a pull request. Commit branches contain patches from others that you want to ack (and maybe contain patches by yourself with acks from others) - these you send as pull requests. In the commit branch, you can make small changes to the original patch (e.g. coding style related), but larger changes should go through the list again from a contribution branch. (Does this have to be a pull request? This requires people to have a public git tree somewhere... Why not send these as standard patches with some annotation?)

In the end it was decided that sending pull requests is not so efficient after all. So instead, the commit branch should still be sent to the list in the normal way. Peter applies it from his mailer, and the push script will make sure that patchwork is updated. Make sure that the commit branch's cover letter clearly indicates that these patches are ready to be committed. Of course, make sure acks are included.

The people who make ready-to-commit branches need admin access on patchwork to be able to mark patches as superseded. Since ready-to-commit branches will anyway only be taken if they come from someone Peter trusts, granting admin access to that person shouldn't be a problem. We'll try this approach now and evaluate at the FOSDEM meeting.
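With pwclient, the superseding step could look like this (a sketch; the patch IDs are placeholders, and "Superseded" is a standard patchwork state):

```shell
# Mark the patches that went into a ready-to-commit branch as superseded,
# so they drop out of the pending queue (this needs the patchwork admin
# rights discussed above).
supersede_patches() {
    for pwid in "$@"; do
        pwclient update -s Superseded "${pwid}"
    done
}
# Example: supersede_patches 1234 1235 1236
```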

For the more complicated patch sets, there is sometimes the problem that there is some discussion but no convergence and it just dies out. It would be good if Peter would then step in and just force a decision. And if it has to be discussed at a developer meeting, it should be said explicitly. We can also ping Peter to take a decision.

A checkpatch/checkpackage script would be useful to cover the low-hanging-fruit issues with patches. This should primarily be a checkpackage script, but ideally it should also be able to take a patch and detect which files have to be checked. First of all we want to check packages (Config.in, pkg.mk, patch naming and Signed-off-by). We can ask the new contributors on the list if anybody is willing to start implementing this.
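A very rough sketch of what such a checkpackage script could flag, assuming it is pointed at a package directory (the specific checks shown are illustrations, not an agreed list):

```shell
# Check a package directory (package/<pkg>) for some obvious problems:
# missing Config.in or <pkg>.mk, badly named patches, patches without a
# Signed-off-by line.  Naming conventions here are assumptions.
checkpackage() {
    pkgdir="$1"
    pkg=$(basename "${pkgdir}")
    [ -f "${pkgdir}/Config.in" ] || echo "${pkg}: missing Config.in"
    [ -f "${pkgdir}/${pkg}.mk" ] || echo "${pkg}: missing ${pkg}.mk"
    for p in "${pkgdir}"/*.patch; do
        [ -e "$p" ] || continue
        case "$(basename "$p")" in
            "${pkg}"-*.patch) ;;   # <pkg>-<description>.patch is fine
            *) echo "${pkg}: badly named patch $(basename "$p")" ;;
        esac
        grep -q '^Signed-off-by:' "$p" ||
            echo "${pkg}: $(basename "$p") lacks Signed-off-by"
    done
}
```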

The number of mails on the list: it's about 150 e-mails a day, so we suspect that people who are just using buildroot are scared off from the list. Hence the idea of splitting the list. Putting the git commits on a separate mailing list is certainly a possibility, but it would only remove about 300 of the 2000 mails per month. We don't see a good way to split the lists at all, because a basic user very quickly becomes a basic developer, and forcing them to see the patches is probably good. Also, having to subscribe to such a big list is a significant barrier, but opening it to non-subscribers is not a good idea.

Thomas calls for buildroot contributors to join stackoverflow, subscribe to the buildroot channel, and vote down wrong answers. Not to recommend stackoverflow as a channel, but at least to make it less bad.

We don't have a real primary support channel. We like IRC or the mailing list, but neither is really ideal for newcomers. The website doesn't really point out very well where people can get help. It is also missing a search box - we could use google site search, but then we should make sure that the mailing list archives appear on it and they are on lists.busybox.net, not *.buildroot.{net,org}. We don't really know what we can do to make a difference.

Conclusions:

  • Maintainers should take patches in ready-to-commit branches and send them out to the list, and update patchwork to supersede the patches they included.
  • Peter should force decisions on major changes when he is asked to.
  • Yann will add a feature to patchwork that makes Acked-by tags visible in the web front-end and the pwclient interface.
  • We will continue the weeding of old patches with bi-weekly requests for updates.
  • We will try to get a "new contributor" to work on a checkpackage script.

Internal toolchain backend status

The question is: should we try to re-add support for crosstool-NG? The problem is that crosstool-NG is really not written to be part of another build system. Now that the internal toolchain has been converted to the package infrastructure, it is easier to maintain (as can be seen from the fact that there have now been some changes to the internal toolchain). And now we also have glibc/eglibc support. SSP is currently problematic for the internal toolchain: it is not very well supported or tested, and it breaks e.g. the two-stage toolchain build. Note also that SuperH is currently still broken, because it always builds a multilib toolchain, and the kernel always wants the m4-nofpu variant instead of the m4 variant.

We can probably remove the experimental tag from (e)glibc.

Thomas has patches to add musl support in the internal backend. It needs a patch against gcc. The next task is a two-stage build for gcc (which was posted earlier but didn't work correctly).

We have an issue that our internal uClibc deviates more and more from the external uClibc toolchains, which makes it more and more difficult to properly support uClibc in packages. (This is why Thomas is pushing back on removing package patches that are only unneeded because our uClibc carries patches to add the feature.) It will get better if uClibc finally releases a new version, but even then it will take time for the external toolchains to pick it up. It would be nice if we could just deprecate uClibc, but there is simply no alternative, especially for noMMU platforms. Thomas thinks it's time for someone to fork uClibc and get it moving again. We could also remove the officially supported external toolchains with uClibc (except for Blackfin, but that one works pretty well). We would really like to keep support for noMMU platforms, but unfortunately we don't see many contributions from that corner.

Which brings us to the point of avr32: when can we remove it? Simon Dawson is still using it. We could add more and more exceptions to the autobuilders. It doesn't run very often on the autobuilders anyway, because there are many more ARM autobuild configs and just a single avr32 autobuild config. Or we could remove avr32 from the autobuilders. Or we could try sending failure reports to Simon (if he agrees). This is what we will do. If Simon doesn't agree, then we'll remove it from the autobuilders and mark it as deprecated.


BR2_EXTERNAL

The first question is: do we really want something like BR2_EXTERNAL? It's clear that there are people who will use it, though Thomas and Peter won't. The proposal of Thomas supports external packages (Config.in and pkg.mk), external defconfigs, and external board directories (though that was already possible).

A limitation at the moment is that BR2_EXTERNAL has to be passed in the environment or on the command line every time. If you forget it, the .config values for the external packages will be lost. A possibility could be to put a Makefile in the external directory that calls into buildroot with the correct environment. We would also add it to the generated Makefile in the output directory. And just to make sure things are consistent, the default output directory would change to the output directory in the external directory. An alternative is to store the BR2_EXTERNAL value in a new file in the output directory and include it in the top-level Makefile. The first time, you give it in the environment and the .external file is created in the output directory. On later runs, you can still override it in the environment or on the command line.
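From the user's side, the stored-value alternative could look like this (a sketch only; the paths, the defconfig name, and the exact mechanics of the remembered file are assumptions):

```shell
# First run: BR2_EXTERNAL is passed explicitly; Buildroot would record it
# (e.g. in a ".external" file) inside the output directory.
first_build() {
    make BR2_EXTERNAL=/path/to/mytree O=/path/to/output mytree_defconfig
}

# Later runs: the stored value would be picked up again via the top-level
# Makefile, so forgetting BR2_EXTERNAL no longer loses the external
# packages' .config values.
later_build() {
    make O=/path/to/output
}
```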

There is still the issue that a defconfig doesn't contain a reference to the external tree. This could be solved by adding it manually in the savedefconfig target, but probably this is overkill.

The biggest reason to have BR2_EXTERNAL is for a company to be able to adapt buildroot to their needs, especially if they have a non-git VCS. You really want to make it visible what comes from upstream and what the company changes are; if you pull patches from buildroot and put them on top of the svn tree, it looks as if they come from yourself. The external mechanism also doesn't make it harder for companies to upstream stuff. We could encourage users to keep their proprietary packages and boards in the external tree, but to make changes to open source projects directly in the buildroot tree. For Rockwell Collins, their approach will eventually be to treat buildroot as a COTS component that is imported unchanged (probably a release version) - that fits much better with the QA workflow of the company.

It has some overlap with the local.mk stuff. It's a bit different, because local.mk gets included before the packages while the external stuff gets included after the packages (or at least the order is undefined). local.mk is sometimes used to fake external trees, but this use case will continue to work. Note that there is a section missing in the manual to explain how to do this. There was some talk about deprecating local.mk because it just gives too much freedom. But unless there's a real reason to remove it, we'll just keep it around.

What about multiple external trees? This would be useful for e.g. separating changes to open source packages from proprietary packages. After some discussion the conclusion was that this can easily be done by creating subdirectories in your external tree.

Regarding the directory hierarchy in the external tree, it was agreed that it is a good idea to force three subdirectories: package, board, configs. Buildroot's package/Config.in will source $BR2_EXTERNAL/package/Config.in.

What Ryan is still missing is a way to make a release. They have to do some extra things when making a release, like tarring the sysroot. These things are to be shared across the different projects in the company. Right now each project has to make its own post-image script that does the same. Thomas indicated, however, that it is possible to set some environment variables (though this possibility should be added to the documentation); these will be visible to the post-image script, which can act accordingly. Also, the post-image script can be shared across the company, or call a shared script. @Ryan - we have since taken the information that we learned from this meeting and come up with a solution to this problem. I (Ryan) will send out an email sometime in the near future that describes our solution to the problem.

Buildroot.config

Yann started this project to store demonstration defconfigs that show e.g. video acceleration on the RPi. We don't normally store these in buildroot. A second reason was that for these configs you need to be able to generate ready-to-flash images, so you need additional scripts to assemble kernel, rootfs and bootloader into an image. Plus there may be a need for additional fixups. So there is a set of scripts to do this post-processing work - these could actually become part of buildroot, and Yann would upstream them if they were of sufficient quality. They are built out of fragments that are combined by a super-script. If BR2_EXTERNAL existed, it could be used with that.

Another thing that Yann added to buildroot.config is brsh. This is a convenience script on top of buildroot that makes it easier for Yann to manage things. It's a bash shell with a few functions, e.g. initializing a project (clone buildroot, run 'make foo_defconfig' and 'make').

Another feature in buildroot.config is the concept of a project: the post-build and post-image scripts assume a specific structure in the board directory, with a hook file that contains the instructions about what needs to be done.

So in the end there are three things. First, the scripts to generate SD card images; they should be included in buildroot. Then there are the bloaty defconfigs that we currently don't want in buildroot - unless we change our policy of minimal defconfigs. Another reason why we may want to do that is for test configurations, if we do (semi-)automatic testing on the target. Can we include these in buildroot? An alternative is to keep them externally, e.g. in buildroot.config or on a wiki page.

It is important for sure that this kind of information is available somewhere - e.g. during GSoC, when Thomas wanted to test one of the packages created by Spenser, he had to find out himself what he needed to do on the board to make it work. A problem with demo configs is that it is difficult to draw a boundary. E.g. for the RPi, do we include a qt5 demo application that shows all the acceleration, or is it enough to just have command-line GStreamer? If these defconfigs are out of tree, then there is a problem with keeping them in sync (which BR version did this apply to?).

There are two reasons to have the demos: 1. users have to find out what exactly they have to enable to get hardware acceleration on their RPi; 2. users may not even realize that you can get this acceleration with buildroot. What would the demo defconfigs be? Enabling graphics acceleration is one; how to use SELinux is another example - but that one has the disadvantage that you randomly choose a specific architecture for the demo. The proposal is to put the demo defconfigs in a new demo/ directory, with supporting files still under board/. For demos this may be OK, but for test cases it's not so nice to clutter the board directory.

We will start with some solution, and see later if we change our minds and put it somewhere else. The demos will go in the configs directory, named <board>_demo_videoaccel_defconfig for instance. Test cases will be put in support/test, and you build them with 'make BR2_DEFCONFIG=support/test/foo/defconfig defconfig'. Any supporting files are put under support/test/foo. Or, even better: give support/test the structure of a BR2_EXTERNAL, so we can also add test packages and have a nice way of working. It is also a nice in-tree demonstration of how BR2_EXTERNAL works. This will also make it easier to add test cases for things like the local site method.

Yann will upstream the scripts to create an SD card image for the RPi and update rpi_defconfig to use them. Yann will also separately submit a demo defconfig. We'll give feedback on the list about how features should be split between the minimal defconfig and the demo defconfig.

Quilt

Ryan raised the problem of how to create and maintain package patches. Some time ago the idea was raised to use quilt instead of patch in apply-patches.sh. Another option is to initialize a git repository in the build directory; if you do that on every build, however, it creates quite a bit of overhead. Yet another option is to add a target, foo-git or something like that, which re-extracts the tarball in a different place, applies the patches with git, and sets OVERRIDE_SRCDIR.
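The "foo-git" idea could be sketched like this (the function name, the use of git am, and pointing the package's OVERRIDE_SRCDIR at the result are all assumptions about how such a target might work):

```shell
# Re-create the package sources as a git tree with the Buildroot patches
# applied as individual commits, so they can be reworked with git rebase
# and exported again with git format-patch.
setup_patch_tree() {
    src="$1"        # directory with freshly extracted package sources
    patches="$2"    # package/<pkg>/ directory holding the *.patch files
    ( cd "${src}" &&
      git init -q &&
      git add -A &&
      git commit -q -m "vanilla sources" &&
      # git am assumes mailbox-formatted patches; plain diffs would
      # need git apply followed by a commit instead.
      git am "${patches}"/*.patch )
}
```

The package's <pkg>_OVERRIDE_SRCDIR could then point at that tree for subsequent builds.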

However, there are no volunteers for doing this right away.

Config.in.legacy

The issue is that the current way legacy options are automatically converted into new options is not so transparent: if you just run 'make menuconfig' and disable the legacy options, then the new options will _not_ be selected, while if you run 'make oldconfig; make menuconfig' they will be. The alternative is to completely remove the feature of automatically selecting the new option, and just tell the user to do it. The problem with this is that any other option that depends on the legacy option will be unselected by Kconfig, so you lose part of your old config. E.g. if python is renamed to python2, and you previously had selected python and python-nfc, then you will have a legacy option telling you to select python2, but when you select python2, python-nfc will no longer be selected.

So a better way would be to preprocess the .config and replace legacy option names with the new option names. This will always work, because we anyway only have this automatic conversion when there is really a one-to-one mapping between the old and new symbol. In terms of maintainability, this is the same as maintaining Config.in.legacy. We never have to remove the sed line, so we don't have to worry about some very old defconfig not working anymore. Using sed has the added advantage that we can use it to get rid of BR2 as well.
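The sed approach could be sketched as follows, using the python -> python2 rename from the example above (a sketch; GNU sed's \b word boundary and in-place editing are assumed):

```shell
# Rewrite a legacy symbol to its new name in .config before Kconfig sees
# it, so options that depend on it (e.g. python-nfc) stay enabled.
# The \b boundaries prevent touching symbols like BR2_PACKAGE_PYTHON_NFC.
rename_legacy_option() {
    config="$1"
    sed -i 's/\bBR2_PACKAGE_PYTHON\b/BR2_PACKAGE_PYTHON2/g' "${config}"
}
```

In the real implementation there would be one such substitution per one-to-one legacy rename, accumulated over time.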

Conclusion: we go for the sed approach if someone implements it; for the time being we commit ThomasDS's hg patches.

legal-info / SPDX

Arnout introduces SPDX, a format to store legal info. It has a header, information about the package (version, who created the SPDX file, a tag saying whether it was automatically created or not, etc.), then a file manifest with all the files in the package. For every file, this manifest specifies the license of that file and other relevant info (copyright holder, a SHA of the file). The overall information also contains the global SHA of the package (the SPDX file excluded). The final section contains the concluded license of the overall package (and the declared license, as given by the upstream package author, which may or may not match the concluded license). There are some tools to manipulate SPDX files, in various formats (JSON, XML, etc.).
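For reference, a minimal hand-written example of the SPDX tag-value format described above (illustrative only: the package name, version and all values are dummies, and several fields the spec requires are omitted for brevity):

```text
SPDXVersion: SPDX-1.2
DataLicense: CC0-1.0
PackageName: foo
PackageVersion: 1.0
PackageLicenseDeclared: GPL-2.0+
PackageLicenseConcluded: GPL-2.0+

FileName: ./src/main.c
LicenseConcluded: GPL-2.0+
FileCopyrightText: <text>Copyright (C) 2013 Example Author</text>
```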

Fossology is a tool that analyzes the licenses of files by doing pattern matching on them, trying to find common indications of licenses. It used to generate a manifest in a special text format, but the tool was more recently updated to generate an SPDX file. Yocto uses it now: after patching a package, it sends it to the Fossology server, waits for the answer (which can take a long time), and stores it.

Fossology allows you to upload a tarball, which it will then analyze to provide a license report in the form of an SPDX file. There is a public Fossology server, but since it's open source anyone can install it - except that it's huge and very resource-hungry.

SPDX is a standard format to exchange information about licenses; you can give the files to lawyers, who can do license analysis. In Buildroot, there are two possible directions around SPDX:

  • Buildroot could at least provide its own SPDX file, to declare the license of the different files in Buildroot.
  • Buildroot could query the Fossology server for licensing information for each package, and store the SPDX results as part of the legal-info.

Providing an SPDX file for Buildroot itself is difficult: we would have to know the license of each of our files. For the Config.in and .mk files, it's easy, but for the patches that we apply on packages, their license is the license of the package to which they apply. Since collecting these licenses in a reliable way is already a difficult problem, generating reliable SPDX info for Buildroot seems hard.

U-Boot is one project that is being converted to use SPDX properly, by storing SPDX tags in the source files. It doesn't include an SPDX file (yet), however. The general feeling is that this seems really complicated to do, and SPDX isn't widely used yet, so the benefits aren't clear.

One point raised is that Fossology is almost never able to determine the license for *all* files, so for every package the license returned is always something like "some GPL, some BSD, some other license, and unknown license".

One more thing is that SPDX has standardised license "acronyms", and of course they don't match what we are using. The question is whether we should base our <pkg>_LICENSE metadata on these standardised SPDX license acronyms. Since there's no immediate need to be compatible with SPDX license names, we don't need to migrate to those names. And because the mapping between the current Buildroot names and the SPDX names is most likely 1:1, we can always migrate later on with a bit of sed usage. However, for new packages that have a license that isn't yet in the list of license names used by Buildroot, using the SPDX license name may be a good idea.

Buildroot instrumentation

Yann has a pending patch that instruments every step (extract, configure, build, install) with a pre- and post-hook that is run with the package as argument. This allows you to do things like measuring the time it takes to execute the step, evaluating the impact of the step on the rootfs size, or detecting which files in the target were affected by the step. Thomas has done something like this before, but not by adding a generic hook - instead by just hacking pkg-generic.mk. Yann's patch set also adds this call before and after the hooks, but this is considered overkill by the others. Adding this stuff to the existing hooks is a bad idea, because it would add hundreds of new variables and slow things down considerably. So instead the commands should be adapted. Yann will adapt and re-send the patches.
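Such a step hook could be as simple as the following (a sketch; the argument convention and log format are assumptions, not Yann's actual interface):

```shell
# Called before and after each step, for example:
#   step_hook start build busybox
#   ...build step runs...
#   step_hook end build busybox
# It appends a timestamped record from which per-step durations (and,
# with extra fields, rootfs-size deltas) can be computed afterwards.
step_hook() {
    phase="$1"; step="$2"; pkg="$3"
    printf '%s %s %s %s\n' "$(date +%s)" "${phase}" "${step}" "${pkg}" \
        >> build-time.log
}
```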

Parallel top-level make

The patches that clean up the makefiles so they have proper dependencies are certainly acceptable. However, the final patch that removes .NOTPARALLEL is not. Instead, we could put an explanation on top of it that explains why it is needed (parallel top-level builds are not reproducible and may even fail, because of missing explicit dependencies in buildroot). Users who really insist can then still do a parallel build. We discussed various wild ideas for detecting missing dependencies or making sure that it really works (e.g. a per-package sysroot, but the overhead of that is probably larger than the gain of the parallel build) - but nothing we really want to implement.

Bottom line: we accept Fabio's patches except the last one that removes .NOTPARALLEL.

Topics to hack on (Sunday)

  • help triage the pending patches
  • look at some autobuild problems
  • instrumentation of build steps
  • update the TODO list at Buildroot#Todo_list (some items have already been handled); add links to the mail archive so people can figure out what each todo item means.
  • ...