BeagleBoard/GSoC/2021 Proposal/GPGPU with GLES



About Student: Jakub Duchniewicz
Mentors: Hunyue Yau
Code: not yet created!
GSoC: [1]


Discussing the implementation ideas with Hunyue Yau and others on #beagle-gsoc IRC.

About you

IRC: jduchniewicz
Github: JDuchniewicz
School: University of Turku/KTH Royal Institute of Technology
Country: Finland/Sweden/Poland
Primary language: Polish
Typical work hours: 8AM-5PM CET
Previous GSoC participation: Participating in GSoC, especially with BeagleBoard, would further develop my software and hardware skills and help me apply my current knowledge for the mutual benefit of the open source community. I originally planned to do the YOLO project, but after spending several days researching and preparing the [2] I found it is impossible on the current BBAI/X15, hence I want to pursue another way of heterogeneous computing: GPGPU!

About your project

Project name: GPGPU with GLES computing framework


Since BeagleBoards are heterogeneous platforms, why not use them to their full extent? For the time being the GPU block lies mostly unused and cannot assist with any computations, due to non-technical reasons. There is a library made by the IP manufacturer, but it is not open source and thus does not fit the BB's philosophy of open SW/HW. The IP manufacturer (Imagination Technologies) even has a series of teaching lectures on the subject, available [3]. Normally, OpenCL would be used for accelerating on the GPU, but given the limitations discussed above that is impossible. Yet there is another way: GPGPU acceleration with OpenGL ES.

The GPU accelerators from PowerVR (SGX5xx series) are available in all BBs, and a unified acceleration framework for them is the topic of this proposal. The targeted version of OpenGL ES is 2.0, but extending the framework to future versions will not be a problem, as newer versions are backwards compatible. Moreover, this could open the way for new kinds of accelerated computation, such as integer and 32-bit floating-point operations. The API of the library will allow for performing various mathematical operations (limited to a few for the GSoC part of the project) and will return the results to the user in the same format they input the data in.

Apart from the library, the project will explore the performance gains when using the GPU (compared to the CPU) and provide the user with guidelines on when it is feasible to put the BB's GPU accelerator to use and when to refrain from it. This could be a good resource for a training session as mentioned later in the proposal.


The system will comprise two layers: the API layer and the implementation layer. The API will be a set of extensible user calls for accelerating various functions, while the implementation will be the part responsible for mapping the user data to the GPU via a shader program.

The overview of the application in the environment.

The picture above presents the main components of the system.

The implementation is described along the user flow below.

  1. The user creates the data buffers holding the data to be processed and calls the relevant API
  2. The EGL context is established and a shader program is attached
  3. The data is translated and loaded to the GPU as a texture
  4. OpenGL renders the frame
  5. The data is read out and translated to the form chosen by the user
  6. The context is closed and cleaned up
  7. The data is returned to the user

The library should allow for both single-shot computations and reusing the established GL context, feeding the shader with new data every frame. The alternative flow (with a continuous flow of data) establishes the relevant contexts once and asks only for user input each frame.
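Since the results come back through several translation steps, plain CPU reference implementations are useful as ground truth when validating the GPU readback (step 5) and as baselines for the later benchmarks. A minimal sketch, with illustrative function names that are not part of the library's API:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical CPU reference for array addition; the GPU path's
// output can be compared element-wise against this.
std::vector<int> arrayAdditionRef(const std::vector<int>& a, const std::vector<int>& b)
{
    assert(a.size() == b.size());
    std::vector<int> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        out[i] = a[i] + b[i];
    return out;
}

// Hypothetical CPU reference for 1-D FIR convolution, "valid" mode:
// the output has data.size() - kernel.size() + 1 samples.
std::vector<int> firConvolutionRef(const std::vector<int>& data, const std::vector<int>& kernel)
{
    std::vector<int> out(data.size() - kernel.size() + 1, 0);
    for (std::size_t i = 0; i < out.size(); ++i)
        for (std::size_t k = 0; k < kernel.size(); ++k)
            out[i] += data[i + k] * kernel[k];
    return out;
}
```

These also double as the fallback path when no GPU context can be established.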

Alternative ideas/Stretch goals

  • There are two GPUs on the BBAI and BBX15 - the library may be extended to make use of both of them and work in a streaming fashion
  • Incorporating the DSPs into the acceleration by means of OpenCL
  • Or making this part of the OpenCL acceleration scheme once it is proven to be stable and performant
  • Create a Rust version of this project :)

API Overview

The API will be a C++ class, which will be accessible in two ways:

  • single-shot calls
  • continuous data processing

The skeleton of the class and some mock functions are presented below:

class OpenGLESAPI
{
public:
	OpenGLESAPI();
	~OpenGLESAPI();
	// for now specialized types in functions
	std::unique_ptr<int[]> arrayAddition(std::unique_ptr<int[]> a1, std::unique_ptr<int[]> a2); // returns the result
	std::unique_ptr<int[]> firConvolution(std::unique_ptr<int[]> data, std::unique_ptr<int[]> kernel);
	std::unique_ptr<int[]> matrixMultiplication(std::unique_ptr<int[]> a, std::unique_ptr<int[]> b, int size); // matrices in a flattened array?
	// single-shot variants; the names differ from the members above because
	// a static and a non-static member function cannot overload on the same parameter list
	static std::unique_ptr<int[]> arrayAdditionOnce(std::unique_ptr<int[]> a1, std::unique_ptr<int[]> a2);
	static std::unique_ptr<int[]> firConvolutionOnce(std::unique_ptr<int[]> data, std::unique_ptr<int[]> kernel);
	static std::unique_ptr<int[]> matrixMultiplicationOnce(std::unique_ptr<int[]> a, std::unique_ptr<int[]> b, int size);
private:
	std::unique_ptr<GLHelper> helper;
};

And the helper class for GL operations:

class GLHelper
{
public:
	GLHelper();
	~GLHelper();
	void loadData();
	void compute();
	std::unique_ptr<unsigned char[]> downloadData();
	void reconfigure(/* reconfig parameters */);
private:
	GLint ESShaderProgram;
	GLbyte* FShaderSource;             // the specified fragment shader is loaded at runtime (matching the current math operation)
	GLbyte VShaderSource[STATIC_SIZE]; // assuming only the fragment shader is leveraged for calculations, the vertex one has a static size
	EGLDisplay display;
	EGLSurface surface;
	EGLConfig config;
	EGLContext context;
};

The main difference between these two modes is that one will be a static call, creating all the necessary bindings with the help of the GL operations class and then cleaning up, while the other will use a persistent main API class and perform the operations without allocating and deallocating for each call.
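A minimal sketch of the two call patterns, with a counting stub standing in for the real GL layer (all names and the stub itself are hypothetical, just to show the structural difference):

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// Stub in place of the real GLHelper: counts how many times a GL context
// would be created, to illustrate the cost difference between the modes.
struct StubGLHelper {
    static int contextsCreated;
    StubGLHelper() { ++contextsCreated; }  // EGL/shader setup would happen here
    std::vector<int> add(const std::vector<int>& a, const std::vector<int>& b) {
        std::vector<int> out(a.size());
        for (std::size_t i = 0; i < a.size(); ++i) out[i] = a[i] + b[i];
        return out;
    }
};
int StubGLHelper::contextsCreated = 0;

struct Api {
    // Single-shot mode: a fresh context is set up and torn down per call.
    static std::vector<int> arrayAdditionOnce(const std::vector<int>& a, const std::vector<int>& b) {
        StubGLHelper helper;
        return helper.add(a, b);
    }
    // Persistent mode: the context lives as long as the Api object does.
    std::unique_ptr<StubGLHelper> helper = std::make_unique<StubGLHelper>();
    std::vector<int> arrayAddition(const std::vector<int>& a, const std::vector<int>& b) {
        return helper->add(a, b);
    }
};
```

Three calls on one `Api` instance reuse a single context, while three single-shot calls would create (and tear down) three.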

It would be nice to have templatized versions of these functions rather than overloading them for each datatype; this is another idea for extension. Having functions for arbitrary math operations, such as quick rescaling or DL-related ones like ReLU, would also be welcome.

Detailed Implementation

After the API is called, two things happen:

  • the context is either set up or reused
  • the data is transformed from user's format and mapped to the texture (assuming we use a texture)

The first is a matter of setting up all the necessary boilerplate code (which has been done by Hunyue in a small PoC), loading the vertex and fragment shaders and creating the GL ES program to be run on the GPU. I assume the computations are done in the fragment shader and the vertex shader is constant for the application. The fragment shaders will be written for the various computations and will be available to load either as binary blobs or as inline code.

The data has to be transformed from whatever format the user supplied into floating-point values in the range 0.0 to 1.0, because shaders in OpenGL ES 2.0 operate only on floats (this still needs to be verified). The data also has to be mapped to a texture (if a texture is used for the computation).
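As a sketch of that translation step, assume each 16-bit input sample is split across two 8-bit texture channels, which the GPU then samples as floats in [0.0, 1.0]; the packing scheme and names below are illustrative assumptions, not the final design:

```cpp
#include <cassert>
#include <cstdint>

// One texel pair holding the high and low byte of a 16-bit sample.
struct Texel { uint8_t hi, lo; };

// Translation into texture channels (upload side).
Texel pack(uint16_t v) { return { uint8_t(v >> 8), uint8_t(v & 0xFF) }; }

// Reverse translation after readback (download side).
uint16_t unpack(Texel t) { return uint16_t(uint16_t(t.hi) << 8 | t.lo); }

// What the shader would see after sampling a channel: value / 255.0.
float normalized(uint8_t channel) { return channel / 255.0f; }
```

The round trip must be lossless; precision only becomes an issue once the shader does arithmetic on the normalized floats and the result is quantized back to bytes.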

Expected Performance

GPU performance is historically known to be better with big data blocks and infrequent transfers rather than small blocks and many transfers. Accordingly, the goal here is to achieve good performance on large data blocks while minimizing the transfer overhead.
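For the benchmarking part, a minimal timing harness along these lines (a sketch using only std::chrono; the function name is illustrative) can compare the CPU and GPU paths across block sizes to find the crossover point:

```cpp
#include <cassert>
#include <chrono>
#include <functional>

// Times a callable once and returns the elapsed microseconds.
// A real benchmark would repeat the run and take a median;
// this is just the skeleton.
long long timeUs(const std::function<void()>& work)
{
    auto start = std::chrono::steady_clock::now();
    work();
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
}
```

Running the same operation through `timeUs` with both the CPU reference and the GPU path, over a sweep of block sizes, gives the data for the usage guidelines mentioned earlier.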

The expected result is a non-stalling experience when using the API and processing frames. The user can use all the CPU resources normally during frame processing, but the GPU(s) are occupied.

Action Items

  • setup the environment
  • setup SGX and run simple shaders
  • write the array addition
  • write the FIR convolution
  • write the matrix multiplication
  • write broadcast math operations
  • benchmark the functions with various sizes
  • write the examples
  • document
  • record video 1 + 2
  • write up about the process on my blog
  • polish the API and make it extensible


During 25.07.21-08.08.21 I have a summer camp from my study programme and will probably be occupied for half of the day. The camp will most likely be held online, though.

Date Milestone Action Items
13.04.21-17.05.21 Pre-work
  • Expand knowledge OpenGL ES
  • Run the OpenGL computation acceleration on regular PC
18.05.21-07.06.21 Community Bonding
  • Familiarize myself with the community
  • Experiment with BB
14.06.21 Milestone #1
  • Introductory YouTube video
  • Setup the environment
  • Write first blog entry - "Part 1: Game plan"
21.06.21 Milestone #2
  • setup SGX and run simple shaders
  • write the array addition
28.06.21 Milestone #3
  • write the FIR convolution
  • write the matrix multiplication
05.07.21 Milestone #4
  • write broadcast math operations
  • benchmark the functions with various sizes
12.07.21 Milestone #5
  • Write second blog post - "Part 2: Implementation"
  • Evaluate the mentor
19.07.21 Milestone #6
  • document everything
26.07.21 Milestone #7
  • Summer camp, mostly collect feedback for the project
  • Less brain-intensive tasks, documentation, benchmarking
31.07.21 Milestone #8
  • Write third blog post - "Part 3: Optimization and Benchmarks"
  • Polish the implementation
07.08.21 Milestone #9
  • Polish the API and make it extensible
  • Prepare materials for the video
  • Start writing up the summary
14.08.21 Milestone #10
  • Finish the project summary
  • Final YouTube video
24.08.21 Feedback time
  • Complete feedback form for the mentor
31.08.21 Results announced
  • Celebrate the ending and rejoice


Experience and approach

I have a strong programming background in embedded Linux/operating systems, gained as a Junior Software Engineer at Samsung Electronics from December 2017 to March 2020. Additionally, I developed a game engine (PolyEngine) in C++ during this time and gave talks on modern C++ as Vice-President of the Game Development Student Group "Polygon".

Apart from that, I completed my Bachelor's degree at Warsaw University of Technology, successfully defending my thesis titled FPGA Based Hardware Accelerator for Musical Synthesis for Linux System. In this project I created a polyphonic musical synthesizer capable of producing various waveforms in Verilog and deployed it on a De0 Nano SoC FPGA. Additionally, I wrote two kernel drivers - one exposed an ALSA sound device and was responsible for proper synchronization of DMA transfers.

I am familiar with Deep Learning concepts and the basics of Computer Vision. During my studies at UTU I achieved top grades in my subjects, excelling at Navigation Systems for Robotics and Hardware Accelerators for AI.

I have some experience working with OpenGL, mostly from learning it for game-engine needs and for personal benefit. This project does not require in-depth knowledge of it; rather, it requires creating an abstraction over the OpenGL ES bindings and performing the necessary data conversions and extractions inside the API. These are skills I already possess and am proficient with.

In my professional work I often had to complete tasks under time pressure and choose the proper task scoping. Based on this experience, I believe this task is deliverable in the mentioned time-frame.


Since I am used to tackling seemingly insurmountable challenges, I will first of all keep calm and try to come up with an alternative approach if I get stuck along the way. The internet is a vast ocean of knowledge, and time and again I have received help from benevolent strangers on reddit and other forums. Since I believe humans solve problems best collaboratively, I will contact #beagle, #beagle-gsoc and relevant subreddits (I received tremendous help on /r/FPGA, /r/embedded and /r/askelectronics in the past).

If all else fails, I may be forced to change my approach and backtrack, but this will not be a big problem, because the knowledge won't be lost and it will only make my future approaches better. Alternatively, I can focus on documenting my progress in the form of blog posts and videos while waiting for my mentor to come back to cyberspace.

With this project I see fewer problems than with the YOLO one, mainly because the track has been explored by Hunyue and because OpenGL ES is a much more widely used framework than TIDL. However, problems may arise, such as getting the SGX to work with the current BB image (SGX support is known to have been tricky in the past). The schedule of this year's GSoC is also much tighter, so prioritization must be done right. After all, many things in this project can be stretch goals (especially the algorithm implementations).

The resources used for this proposal, and for contingency:


As developers use BBs for computationally intensive tasks such as DL inference or ML calculations, a library which allows them to offload part of their computations to otherwise unused parts of the board would be beneficial. This way the work may be parallelized, and the BeagleBoard becomes a truly heterogeneous platform, allowing all of its components to be used!

Educating developers on when to use the GPU and when not to is also a valuable gain for the community, as not all BB developers are familiar with the limitations of CPU calculations. This could be a tempting topic for an online training/live Q&A session. Having a reusable library abstracting the low-level details of OpenGL ES would reduce the hassle programmers have to go through to achieve acceleration.

Due to the same non-technical issues, the GPU is rarely if ever used on the BeagleBoard. Support for current graphics output has been abysmal. This means we have an IP block that can do computations sitting idle. Even if it is not extremely fast, this can offload the ARM CPU, making the Beagles a truly asymmetric multiprocessor SoC.

~ Hunyue Yau - ds2


The qualification PR is available here.