Difference between revisions of "BeagleBoard/GSoC/BeagleBoard-GPUBx"

==About your project==

''Project name'': BeagleBoard GPUBx - A benchmarking tool for the BeagleBoard GPU, along with a shader editor<br>
  
 
===Description===

===Overview===

BeagleBoard® Blue/Black contains an onboard PowerVR GPU, capable of 2D/3D rendering, which can be programmed through OpenGL ES 2.0. Offloading complex instructions from the CPU onto the GPU is clearly advantageous: the GPU's many cores, all working in parallel, complete such computations much faster than the CPU. <br>

So, through this project, using GLSL along with OpenGL ES, we develop a benchmarking suite of programs built around custom shaders that present the GPU with intensive computations; from the results we measure how efficiently the GPU performed, and thereby benchmark it. <br>
 
'''The GPU will be tested for various parameters, with programs like:''' <br>
 
* '''Post-processing effects''', such as Bloom, HDR, Depth of Field (DoF), and other advanced lighting effects. <br>

These programs constitute the benchmarking tool, which is accompanied by a '''shader editor''', wherein users can easily load their shader code and execute it on the go.
 
'''Languages used:''' C++ along with OpenGL ES 2.0, and OpenGL Shading Language (GLSL).
 
'''Perlin's''' method of noise creation is considered one of the best noise functions for the purpose of terrain creation, and is widely used in almost every graphics-related application or game. Below is a snippet generating the heightmap for each point of the terrain:
 
[[File:noise.png|800px|none|left]]
Here, the '''hash()''' function maps a point's coordinate position to a suitable pseudo-random floating-point value. The main '''noise()''' function then uses these hashes: it transforms the parameter ''''p'''' with a matrix, hashes the resulting lattice coordinates, and mixes those hash values together to produce a smooth noise value. This value is used in the '''generateHeight()''' function, which adds it to the point's y-coordinate, producing an elevation in the terrain. There are also scale factors which can be tweaked to obtain different terrain amplitudes. For references, I am using the book OpenGL Shading Language, https://thebookofshaders.com and https://shadertoy.com<br>
  
3) '''Ocean shaders:''' Ocean shaders are similar in construction to the procedural terrain shader. Here as well we make use of noise, but we make it progressive, so as to generate the wave effect. Again, we can change the amplitude of the noise in order to change the amplitude of the wave. For reference, I am using https://shadertoy.com <br>

4) '''Post-Processing effects:''' As the name suggests, these shaders are executed "post" rendering. Generally, the rendering data is stored in a buffer and then rendered on the screen. After it is rendered, we can tweak it using custom shaders to add new properties to it. Post-processing effects are generally used to create life-like visuals/effects in games/graphical applications. Some of the most common post-processing effects include:

* Bloom

* HDR

* Motion Blur

Post-processing effects are generally GPU intensive when done on a medium to large scale, since they go through a second render pass, and hence do more work in a frame than usual (without post-processing). <br>

''Now, our aim is to test the performance of the onboard PowerVR GPU against these shaders.''<br>

I) '''Bloom:''' Bright light sources and brightly lit regions are often difficult to convey to the viewer, as the intensity range of a monitor is limited. One way to distinguish bright light sources on a monitor is by making them glow: their light bleeds around the light source. This effectively gives the viewer the illusion that these light sources or bright regions are intensely bright. This effect, which makes the light bleed along the fringes, is called Bloom. <br>

The procedure for performing a bloom effect is to first create an HDR texture of all the light sources in the scene whose intensity exceeds a certain threshold. We then perform computations on these light sources and add blur to them. The resulting texture is then added on top of the original texture in the scene, giving a bloom effect. <br> <br>

II) '''HDR:''' HDR stands for 'High Dynamic Range'. Monitors are limited to displaying colors in the range of 0.0 to 1.0, but there is no such limitation in lighting equations. By allowing fragment colors to exceed 1.0, we have a much higher range of color values available to work in. With high dynamic range, bright things can be really bright, dark things can be really dark, and details can be seen in both. <br> Generally, the fragment color is stored in normal framebuffers, wherein the intensity of colors is clamped between [0.0, 1.0]. But if we instead use ''floating-point framebuffers'', the values aren't immediately clamped, and hence these can be used to implement HDR.

=== Shader Editor ===

Along with the sample programs for testing the efficiency of the GPU, the user will also be provided with a shader editor, wherein the user can load his/her own vertex/fragment shaders and execute them on the go. This is helpful for faster debugging, hence improving efficiency. I have created the base code for the application wrapper, which currently has an independent 'Shader' class that opens a fragment and vertex shader based on the FILENAME provided in the argument. This can be easily modified to load the shaders using GUI buttons.
 
===Timeline===

Provide a development timeline with a milestone each of the 11 weeks. (A realistic timeline is critical to our selection process.)
'''Community Bonding period:'''

* Get in touch with the mentors, clear queries regarding the project, and get an insight into what is expected.

* Learn about the BeagleBoard Black/Blue, about its GPU and its limitations, and base my research around that. Check OpenGL ES compatibility with BeagleBoard.

'''2017-06-06:'''

* Start with the programs regarding mathematical computations (Linear Algebra and FFT). Refer to the CLRS Algorithms book to learn about efficient ways to compute the FFT.

* Start developing the main application which holds the entire benchmarking suite.

* Start working on the GUI of the shader editor.

<br>

'''2017-06-13:'''

* Talk with the mentors regarding the programs developed in week 1, get a review, and check whether the build is passing or not.

* Fix the bugs which are known or pointed out by the mentors, and make necessary changes as per their review.

* Develop the main benchmarking suite application such that it loads the shaders from arguments passed.

* Make the shader editor's GUI more robust.

<br>

'''2017-06-20:'''

* Start researching dynamic fractals, and include a shader generating a Mandelbrot and a Julia fractal.

* Integrate these shaders into the main benchmarking suite, and check build status.

* Generate unit tests for each of the programs, and for the suite as well.

* Learn about opening a file explorer to open/load the shaders into the editor.

<br>

'''2017-06-27:'''

* Talk to the mentors, and get their review.

* Fix the bugs as pointed out by the mentors.

* Search for more ideas about shaders which could be incorporated.

<br>

'''2017-07-04:'''

* The shader editor's file picking should be completely implemented; learn about storing the file in a buffer.

* Start working on the noise-based shaders - terrain and ocean shaders - and find more interesting variants of the same.

* Integrate these shaders with the main application.

* The shader editor should now be able to load the shaders properly into the editor from the buffer.

<br>

'''2017-07-11:'''

* Get a review from the mentors and fix the bugs pointed out.

* Work on improving the existing shaders by adding a dynamic character (changing with time) to make them more GPU intensive.

* Check the programs with the existing shaders.

* Work on creating the backend to compile and execute the shaders that are loaded into the editor.

<br>

'''2017-07-18:'''

* Complete work on the previous shaders, and if the build was failing earlier, fix the issues.

* Research more about the post-processing effects, and begin working on bloom.

* Continue working on the backend of the shader editor.

* Start documenting everything.

<br>

'''2017-07-25:'''

* Begin working on the benchmarking suite's UI.

* Start working on the HDR and Motion Blur shaders, and fix issues in the bloom shader, if any.

* Check compatibility with the main application.

* The shader editor should be able to properly execute the loaded shaders.

* Documentation should be updated.

<br>

'''2017-08-01:'''

* Get reviews from the mentors on the latest build, fix the issues, and make the modifications pointed out by them.

* Talk to the mentors regarding additional programs that could be suitable for the project.

* The Blur and HDR shaders should be bug-free and integrated with the main application.

* Start researching how to implement instant compilation and execution as the user edits the shaders.

* Documentation should be updated.

<br>

'''2017-08-08:'''

* The shader editor should be ready, with a complete on-the-go editor, and the main benchmarking application should also be ready by now.

* Check for bugs in both applications and talk to the mentors regarding the product.

* Work on improving the documentation.

<br>

'''2017-08-15:'''

* Prepare the final documentation and prepare a demonstration of the product.

<br>
===Experience and approach===

I am currently a 2nd year (4th semester) CSE undergraduate student at the International Institute of Information Technology, Bangalore. I am a game developer by passion, and have experience working with the Unity™ engine. I am also part of a small indie game studio, SJGR Studios. We have our repositories up on GitHub: https://github.com/Shit-Just-Got-Real-Studios. I am well experienced in languages like C, C++, C#, Java, Python, and GLSL, and frameworks/libraries like SDL, SFML, GLEW, PyGame, and OpenGL (with C++, Python, and Java). I am very comfortable working with the Git version control system and am an active contributor on GitHub.

== '''Previous Experience/Projects Undertaken:''' ==

(Projects marked with a ‘*’ are similar to the current GSoC project to be undertaken)

=== '''* GLSL Shaders Pack''' ===

I have made a few WebGL shaders, hosted on the website www.shadertoy.com/user/djeof1. I have made a few ray marching shaders, and also shaders involving voxel terrain marching. These techniques can be used for benchmarking the GPU’s computation power.

'''Github Repository:''' <br>
http://www.github.com/l0ftyWhizZ/Shadertoy-Shaders

=== '''* DJINN IV & V Rendering Engines''' ===

DJINN-IV is a rendering engine, built using PyGame for basic window functions and input handling, and PyOpenGL for basic drawing.
DJINN-V is under development, and is being built using LWJGL and Java.

'''Github Repositories:''' <br>
https://www.github.com/l0ftyWhizZ/DJINN-IV <br>
https://www.github.com/l0ftyWhizZ/DJINN-V <br>

'''Applications developed using these engines:''' <br>
https://github.com/l0ftyWhizZ/SoHo <br>
https://github.com/l0ftyWhizZ/VOXINN

=== '''* AirPaint! (Raspberry Pi)''' ===

This is a small project wherein, using the ADXL345 accelerometer sensor connected to the Raspberry Pi through I2C, the motion of the sensor paints lines in the same motion on screen.
Made using PyGame for basic drawing/rendering purposes.

'''Github Repository:''' <br>
http://www.github.com/l0ftyWhizZ/ADXL345

=== '''Multiplayer 3D Car Racing Game''' ===

As part of our Java course, two of my peers and I developed a multiplayer (over LAN) 3D car racing game in the JMonkey 3.0 engine, along with a basic path-tracing algorithm for the AI bot, and our own networking algorithm.

'''Github Repository:''' <br>
https://github.com/l0ftyWhizZ/3D-Racing-Game

=== '''VR & AR applications in Unity™ Engine''' ===

I have also been part of the development of an educational Augmented-Reality application in the Unity™ engine, using the Vuforia™ SDK.
Along with that, I have developed a Virtual-Reality game using the Google Cardboard™ SDK with Unity™.

'''Github Repositories:''' <br>
https://github.com/l0ftyWhizZ/Vision <br>
https://github.com/Lofty-Whizz-Studios/The-Exorcism

''More projects of mine can be found here: https://www.github.com/l0ftyWhizZ''
  
 
===Contingency===

===Benefit===

If completed successfully, this project has great potential, since it can be used to improve the efficiency of the onboard GPU. Considering the small scale factor of the PowerVR GPU, this benchmarking tool can be used to enhance its efficiency. Moreover, the shader editor would also be helpful to people who wish to test their shaders and debug them instantly, since an intuitive editor results in a better workflow.
  
 
===Suggestions===

* More details could have been given regarding what exactly is expected of the project, what kind of programs are to be developed, and how many of them are to be made.

* Nothing else to suggest.

Latest revision as of 09:24, 3 April 2017


BeagleBoard-GPUBx

{{#ev:youtube|2pKtmSPVdPw||right|BeagleBoard GPUBx}}

BeagleBoard GPUBx is an extensive toolkit for benchmarking the BeagleBoard GPU. This toolkit consists of various programs/shaders, built using OpenGL ES 2.0 and GLSL, which involve complex mathematical computations for rendering; we make the GPU perform these computations and benchmark it based on how it performs. Hence, this project provides an estimate of the efficiency of the onboard GPU. I have also created a YouTube video for a quick demonstration of my project. Along with this, GPUBx will also contain an editor which will allow users to test and execute their shaders on the go.

Student: Shreyas Iyer
Mentors: Hunyue Yau, Robert Manzke
Code: https://github.com/l0ftyWhizZ/BeagleBoard-GPUBx
Wiki: http://elinux.org/BeagleBoard/GSoC/BeagleBoard-GPUBx
GSoC: GSoC entry

Status

This project is currently just a proposal.

Proposal

Goal: Sample programs to utilize the GPU for computations. The GPU on most BeagleBone/Board devices is limited to OpenGL ES 2; the goal is to provide sample programs showing how to use GLES2 for computations.
Software Skills: OpenGLES, GLSL
Reference: thebookofshaders
Possible Mentors: Hunyue Yau, Robert Manzke

About you

IRC: Shreyas
Github: My Github Profile
School: International Institute of Information Technology, Bangalore
Country: India
Primary language (We have mentors who speak multiple languages): English, Hindi
Typical work hours (We have mentors in various time zones): 12AM-8AM US Eastern (9:30 AM - 5:30 PM Indian Standard Time (IST))
Previous GSoC participation: This is my second attempt at GSoC. I applied for a project under Copyleft Games last year, but unfortunately it didn't get accepted. I want to work on this project with BeagleBoard because, as a game developer and graphics programmer, I should learn to develop high-quality shaders for my applications and games, in a way that is efficient for the GPU to render. This project serves exactly that purpose, and it will really help me in my career. Also, considering the scale factor of the BeagleBoard, this analysis will help in improving the efficiency of the onboard GPU.

About your project

Project name: BeagleBoard GPUBx - A benchmarking tool for the BeagleBoard GPU, along with a shader editor

Description

Overview

BeagleBoard® Blue/Black contains an onboard PowerVR GPU, capable of 2D/3D rendering, which can be programmed through OpenGL ES 2.0. Offloading complex instructions from the CPU onto the GPU is clearly advantageous: the GPU's many cores, all working in parallel, complete such computations much faster than the CPU.
So, through this project, using GLSL along with OpenGL ES, we develop a benchmarking suite of programs built around custom shaders that present the GPU with intensive computations; from the results we measure how efficiently the GPU performed, and thereby benchmark it.
The GPU will be tested for various parameters, with programs like:

* Fractal shaders, whose computation reflects the memory bandwidth of the GPU.
* Graphically-intensive shaders (2D and 3D) for testing the rendering capability of the GPU, which includes:
** Procedurally-generated terrain using noise functions (Perlin noise in particular), changing the amplitude of the noise dynamically, and checking for performance drops.
** Ray marching in complex 3D environments.
** Ocean shader, using Perlin noise and Fresnel shading.
** Shaders where vertex data is altered according to a noise function, resulting in a dynamically changing mesh. (3D)
* Post-processing effects, such as Bloom, HDR, Depth of Field (DoF), and other advanced lighting effects.

These programs constitute the benchmarking tool, which is accompanied by a shader editor, wherein users can easily load their shader code and execute it on the go.

Languages used: C++ along with OpenGL ES 2.0, and OpenGL Shading Language (GLSL).

Detailed

Aim

The aim of the project is to develop a tool which tests the GPU on various parameters, be it in terms of metrics like Frames Per Second (FPS while rendering complex environments or post-processing effects), or computation time (for computing solutions to complex problems involving FFT or Linear Algebra). So, we develop different programs which will help us benchmark the GPU.

Approach

Before delving into the main content, we discuss the necessity of offloading work from the CPU to the GPU:

Background Theory: The main reason for offloading CPU instructions to the GPU is to exploit the multiple cores present in the GPU. CPUs have a small number of cores (limited to 4 in most cases), whereas modern GPUs have several hundred cores, all running in parallel. Hence, complex instruction sets can be offloaded to the GPU, resulting in faster computation. The figure below shows the basic mechanism:

Offload.png

So, our aim is to exploit the parallel computing of the GPU, by developing shaders involving complex/intensive computations, and benchmarking the GPU based on how it performs.

Programs to be developed:

Fractal shaders:

-- Fractal shaders are generally derived from a considerably simple linear algebraic equation which, when computed per pixel, results in a highly detailed and complex structure. Generally, the algorithm goes through some iterations while drawing, and the computations during these iterations are sometimes resource-heavy even for a powerful system. So, we perform some pre-calculations in order to achieve a decent frame rate. We make use of parallel computing on the GPU, since the rendering of one pixel doesn't depend on a nearby pixel's information. Following is a GLSL code snippet:

Snip1.png

By modifying the values of the constant terms in the given snippet, CONSTANT_1 and CONSTANT_2 in particular, we get different fractals, some of which can be categorized as Mandelbrot fractals and Julia fractals. Also, in order to push the GPU to its limit, we can keep dynamically increasing the number of iterations the algorithm has to go through. I can even tweak the code to implement animated fractals, like pulsating and progressive fractals, by introducing a time-varying attribute in the shader.
For reference, I am using this repository: https://github.com/jonathan-potter/shadertoy-fractal. Below is a simple Mandelbrot fractal:

Mandel.png

Graphically-Intensive shaders

-- In this section, we develop multiple shaders, all serving the same purpose - computing the ‘Frames Per Second’ while the GPU renders the environment. The shaders to be built are categorized as:
1) Ray Marching shaders: In this category, as the name suggests, a ray is "marched" forward step by step, and at every step we check whether the ray has intersected with some primitive or not. This gives us an estimate of how far we are from a primitive, and we base our rendering calculations around that. This method is useful while rendering volumetric primitives that are to be integrated along the ray.
Using this, we can build shaders that result in different ray-marched environments with different levels of detail. I have already made a few ray marching shaders, which are up on my Shadertoy profile as well as on my GitHub profile.
Following is a basic ray marching function in GLSL:

Raymarch.png

So, in the above snippet, modifying the values of the constants would yield a different output, be it different volumetric primitives, or different marching step size.
Using a similar technique, I have also developed a Voxel terrain marching shader, which is hosted on https://www.shadertoy.com/profile/djeof1. For ideas and references, I am using https://www.shadertoy.com.

2) Procedurally-generated terrain using Noise: As the name suggests, we initially create a flat ground, where the y-coordinate (or height) associated with each point is the same. Then, using a modified version of a noise function, chosen so that the resulting terrain is acceptable in terms of variations in height and continuity, we modify the y-coordinate of each point according to the function, which results in a terrain, and then we perform ray marching over the terrain.
There are several inbuilt noise functions defined in GLSL like,

       vec2 noise (vec2 pixel), vec2 noise (vec3 pixel), vec3 noise (vec3 x), vec3 noise (vec2 x) ...

In order to create higher-quality noise, we sum the noise function up to a certain octave: each successive octave samples the same noise at double the frequency and adds it at half the amplitude. The more octaves we add, the closer the result gets to an ideal, detailed noise function.
Now, in order to make an ideal noise function, we must take care of the following major properties:

  1. It should be a continuous function giving the appearance of randomness.
  2. The function must have a well-defined range of outputs. (Between [-1,1] or [0,1])
  3. The values produced by this function must not show regular patterns.
  4. It must be isotropic, i.e., rotationally invariant.

Perlin's method of noise creation is considered one of the best noise functions for the purpose of terrain creation, and is widely used in almost every graphics-related application or game. Below is a snippet generating the heightmap for each point of the terrain:

Noise.png

Here, the hash() function maps a point's coordinate position to a suitable pseudo-random floating-point value. The main noise() function then uses these hashes: it transforms the parameter 'p' with a matrix, hashes the resulting lattice coordinates, and mixes those hash values together to produce a smooth noise value. This value is used in the generateHeight() function, which adds it to the point's y-coordinate, producing an elevation in the terrain. There are also scale factors which can be tweaked to obtain different terrain amplitudes. For references, I am using the book OpenGL Shading Language, https://thebookofshaders.com and https://shadertoy.com

3) Ocean shaders: Ocean shaders are similar in construction to the procedural terrain shader. Here as well we make use of noise, but we make it progressive, so as to generate the wave effect. Again, we can change the amplitude of the noise in order to change the amplitude of the wave. For reference, I am using https://shadertoy.com

4) Post-Processing effects: As the name suggests, these shaders are executed "post" rendering. Generally, the rendering data is stored in a buffer and then rendered on the screen. After it is rendered, we can tweak it using custom shaders to add new properties to it. Post-processing effects are generally used to create life-like visuals/effects in games/graphical applications. Some of the most common post-processing effects include:

  • Bloom
  • HDR
  • Motion Blur

Post-processing effects are generally GPU intensive when done on a medium to large scale, since they go through a second render pass, and hence do more work in a frame than usual (without post-processing).
Now, our aim is to test the performance of the onboard PowerVR GPU against these shaders.
I) Bloom: Bright light sources and brightly lit regions are often difficult to convey to the viewer, as the intensity range of a monitor is limited. One way to distinguish bright light sources on a monitor is by making them glow: their light bleeds around the light source. This effectively gives the viewer the illusion that these light sources or bright regions are intensely bright. This effect, which makes the light bleed along the fringes, is called Bloom.
The procedure for performing a bloom effect is to first create an HDR texture of all the light sources in the scene whose intensity exceeds a certain threshold. We then perform computations on these light sources and add blur to them. The resulting texture is then added on top of the original texture in the scene, giving a bloom effect.

II) HDR: HDR stands for 'High Dynamic Range'. Monitors are limited to displaying colors in the range of 0.0 to 1.0, but there is no such limitation in lighting equations. By allowing fragment colors to exceed 1.0, we have a much higher range of color values available to work in. With high dynamic range, bright things can be really bright, dark things can be really dark, and details can be seen in both.
Generally, the fragment color is stored in normal framebuffers, wherein the intensity of colors is clamped between [0.0, 1.0]. But if we instead use floating-point framebuffers, the values aren't immediately clamped, and hence these can be used to implement HDR.

Shader Editor

Along with the sample programs for testing the efficiency of the GPU, the user will also be provided with a shader editor, wherein the user can load his/her own vertex/fragment shaders and execute them on the go. This is helpful for faster debugging, hence improving efficiency. I have created the base code for the application wrapper, which currently has an independent 'Shader' class that opens a fragment and vertex shader based on the FILENAME provided in the argument. This can be easily modified to load the shaders using GUI buttons.

Timeline

Provide a development timeline with a milestone each of the 11 weeks. (A realistic timeline is critical to our selection process.)

Community Bonding period:

  • Get in touch with the mentors, clear queries regarding the project, and get an insight into what is expected.
  • Learn about the BeagleBoard Black/Blue, about its GPU and its limitations, and base my research around that. Check OpenGL ES compatibility with BeagleBoard.

2017-06-06:

  • Start with the programs regarding mathematical computations (Linear Algebra and FFT). Refer to the CLRS Algorithms book to learn about efficient ways to compute the FFT.
  • Start developing main application which holds the entire benchmarking suite.
  • Start working on the GUI of shader editor.


2017-06-13:

  • Talk with the mentors regarding the programs developed in week 1, and get a review and check whether the build is passing or not.
  • Fix the bugs which are known or pointed out by the mentors, and make necessary changes as per their review.
  • Develop the main benchmarking suite application such that it loads the shaders from arguments passed.
  • Shader editor's GUI should be made more robust.


2017-06-20:

  • Start researching about dynamic fractals, and include a shader generating a Mandelbrot and Julia fractal.
  • Integrate these shaders into the main benchmarking suite, and check build status.
  • Generate unit tests for each of the programs, and the suite as well.
  • Learn about opening a file explorer to open/load the shaders into the editor.
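The per-pixel work in the planned fractal shaders is the classic escape-time iteration. A CPU sketch in Python of the loop each fragment would run on the GPU (function name and iteration cap are illustrative):

```python
def mandelbrot_iters(cx, cy, max_iter=100):
    """Escape-time iteration count for the point c = cx + i*cy.
    This is the same loop the fragment shader runs per pixel; the count
    is then mapped to a colour. (CPU sketch, not the GLSL itself.)"""
    zx = zy = 0.0
    for i in range(max_iter):
        # Escape test: |z|^2 > 4 means the orbit diverges.
        if zx * zx + zy * zy > 4.0:
            return i
        # z = z^2 + c, using real/imaginary parts explicitly as GLSL would.
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
    return max_iter
```

A Julia variant keeps c fixed for the whole frame and seeds z from the pixel coordinate instead; animating c over time gives the "dynamic" fractal that keeps the GPU busy.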


2017-06-27:

  • Talk to the mentors, and get their review.
  • Fix the bugs as pointed out by the mentors.
  • Search for more ideas about shaders which could be incorporated.


2017-07-04:

  • The shader editor's file picking should be completely implemented; learn about storing the loaded source in a buffer.
  • Start working on the noise-based shaders (terrain and ocean), and look for interesting variants of the same.
  • Integrate these shaders with the main application
  • Shader editor should now be able to load the shaders properly into the editor from the buffer.


2017-07-11:

  • Get review from the mentors and fix the bugs pointed out.
  • Work on improving the existing shaders by adding a dynamic character (changing with time) to make them more GPU-intensive.
  • Check the programs with the existing shaders.
  • Work on creating the backend to compile and execute the shaders that are loaded onto the editor.


2017-07-18:

  • Complete work on the previous shaders and, if the build was failing earlier, fix the issues.
  • Research more about the post-processing effects, and begin working on bloom.
  • Continue working on the backend of Shader editor.
  • Start documenting everything.
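The bloom effect mentioned above follows a threshold-then-blur-then-add pattern. A minimal 1-D CPU sketch of that pipeline (the real effect operates on a 2-D framebuffer across several fragment-shader passes; threshold and strength values are illustrative):

```python
def bloom(pixels, threshold=0.8, strength=0.5):
    """Sketch of the bloom post-process on a 1-D row of brightness values:
    1) extract pixels above the threshold (the "bright pass"),
    2) blur the bright pass,
    3) add the blurred glow back onto the original image."""
    bright = [p if p > threshold else 0.0 for p in pixels]
    n = len(bright)
    # 3-tap box blur of the bright pass, clamping at the edges.
    blurred = [
        (bright[max(i - 1, 0)] + bright[i] + bright[min(i + 1, n - 1)]) / 3.0
        for i in range(n)
    ]
    # Additive blend, clamped to the displayable range [0, 1].
    return [min(p + strength * b, 1.0) for p, b in zip(pixels, blurred)]
```

Because the blur radius directly scales the texture-fetch count per fragment, widening it is a convenient knob for making the benchmark more GPU-intensive.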


2017-07-25:

  • Begin working on the benchmarking suite's UI.
  • Start working on the HDR and motion blur shaders, and fix issues in the bloom shader, if any.
  • Check compatibility with the main application.
  • The shader editor should be able to properly execute the loaded shaders.
  • Documentation should be updated.
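For the HDR shader, one common approach — an assumption here, not a committed design — is Reinhard tone mapping, which compresses an unbounded high-dynamic-range luminance into the displayable [0, 1) range. A CPU sketch (on the GPU this runs per fragment):

```python
def reinhard(hdr):
    """Reinhard tone mapping, L / (1 + L): maps any non-negative HDR
    luminance into [0, 1) while preserving ordering, so bright areas
    compress gracefully instead of clipping. (CPU sketch of the maths
    behind the planned HDR shader.)"""
    return [l / (1.0 + l) for l in hdr]
```

Rendering the scene to a floating-point buffer and then tone mapping it in a full-screen pass is also the natural place to chain the bloom and motion blur stages.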


2017-08-01:

  • Get reviews from the mentors of the latest build, and fix the issues, or make modifications pointed out by them.
  • Talk to the mentors regarding additional programs that can be suitable for the project.
  • The blur and HDR shaders should be bug-free and integrated with the main application.
  • Start researching instant compilation and execution as the user edits the shaders.
  • Documentation should be updated.


2017-08-08:

  • The shader editor, with complete on-the-go editing, should be ready, and the main benchmarking application should also be ready by now.
  • Check for bugs in both applications and talk to the mentors regarding the product.
  • Work on improving the documentation.


2017-08-15:

  • Prepare the final documentation and a demonstration of the product.


Experience and approach

I am currently a 2nd-year (4th-semester) CSE undergraduate at the International Institute of Information Technology, Bangalore. I am a game developer by passion and have experience working with the Unity™ engine. I am also part of a small indie game studio, SJGR Studios; our repositories are up on GitHub: https://github.com/Shit-Just-Got-Real-Studios. I am well experienced in languages like C, C++, C#, Java, Python, and GLSL, and in frameworks/libraries like SDL, SFML, GLEW, PyGame, and OpenGL (with C++, Python, and Java). I am very comfortable with the Git version control system and am an active contributor on GitHub.

Previous Experience/Projects Undertaken:

(Projects marked with a ‘*’ are similar to the GSoC project to be undertaken)

* GLSL Shaders Pack

I have made a few WebGL shaders, hosted at www.shadertoy.com/user/djeof1, including ray marching shaders and shaders involving voxel terrain marching. These techniques can be used to benchmark the GPU’s computation power. GitHub repository:
http://www.github.com/l0ftyWhizZ/Shadertoy-Shaders


* DJINN IV & V Rendering Engines

DJINN-IV is a rendering engine built using PyGame for basic window functions and input handling, and PyOpenGL for basic drawing. DJINN-V, built with LWJGL and Java, is under development.

Github Repositories:
https://www.github.com/l0ftyWhizZ/DJINN-IV
https://www.github.com/l0ftyWhizZ/DJINN-V
Applications developed using these engines:
https://github.com/l0ftyWhizZ/SoHo
https://github.com/l0ftyWhizZ/VOXINN


* AirPaint! (Raspberry Pi)

This is a small project in which an ADXL345 accelerometer, connected to the Raspberry Pi over I2C, paints lines on screen that follow the sensor’s motion. Made using PyGame for basic drawing/rendering.

Github Repository:
http://www.github.com/l0ftyWhizZ/ADXL345


Multiplayer 3D Car Racing Game

As a part of our Java course, two of my peers and I developed a multiplayer (over LAN) 3D car racing game in the jMonkey 3.0 engine, with a basic path-tracing algorithm for the AI bot and our own networking algorithm.

Github Repository:
https://github.com/l0ftyWhizZ/3D-Racing-Game


VR & AR applications in Unity™ Engine

I have been part of the development of an educational augmented-reality application in the Unity™ engine using the Vuforia™ SDK. I have also developed a virtual-reality game using the Google Cardboard™ SDK with Unity™.

Github Repositories:
https://github.com/l0ftyWhizZ/Vision
https://github.com/Lofty-Whizz-Studios/The-Exorcism


More projects of mine can be found here: https://www.github.com/l0ftyWhizZ

Contingency

If I am stuck at a particular phase during development and my mentor isn’t available, these are the steps I would take (in no particular order):

  • I would talk to members of the BeagleBoard community by posting my queries on the mailing lists and the IRC channel.
  • I would look up the solution, or similar problems, online; the communities on websites like stackoverflow.com and shadertoy.com are extremely friendly and helpful.
  • I would also ask my computer vision and graphics professor at my institute for guidance.

Benefit

If completed successfully, this project has great potential: the benchmark results show how well the onboard GPU handles different workloads, and can guide efforts to use it more effectively. Considering the small form factor of the PowerVR GPU, such a benchmarking tool helps users understand its capabilities and limitations. Moreover, the shader editor would help people who wish to test and debug their shaders instantly, since an intuitive editor results in a better workflow.

Suggestions

  • More details could have been given regarding what exactly is expected of the project: what kind of programs are to be developed, and how many of them are to be made.
  • Nothing else to suggest.