EBC Exercise 20 The Display SubSystem (DSS)

Embedded Linux Class by Mark A. Yoder


This is a four-part lab that explores the video system drivers. The labs demonstrate the Linux V4L2 and FBdev drivers, as well as basic file I/O, through four small applications: on-screen display (OSD), video recorder, video player, and video loop-thru (video capture copied to video display). It's based on the materials used in TI's DaVinci System Integration using Linux Workshop. The workshop is based on the DVEVM; I've converted those materials to the BeagleBoard.

This lab shows three ways to write directly to the framebuffer (/dev/fb0) on the Beagle:

  1. Write a single value to the entire framebuffer (a minimal sketch of this approach follows the list)
  2. Write the pixel values based on a formula
  3. Read a bmp image and write it
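
The first approach is only a handful of lines. Here is a minimal sketch (not the lab's own code); it assumes only that /dev/fb0 exists and asks the driver for the buffer geometry rather than hard coding it.

/* fbfill.c - sketch: fill /dev/fb0 with a single value.
 * Queries the driver for the geometry instead of hard coding it. */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo fix;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0 ||
        ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) {
        perror("ioctl"); return 1;
    }

    size_t size = (size_t)fix.line_length * var.yres;
    unsigned char *fb = mmap(NULL, size, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    memset(fb, 0xFF, size);   /* write one value everywhere */

    munmap(fb, size);
    close(fd);
    return 0;
}

The second approach replaces the memset() with a loop that computes each pixel value from its (x, y) coordinates; the third reads the pixel data from a .bmp file first.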

The following labs will show how to bring in live video from a web cam via V4L2 and save it to a file, write video from a file to a video framebuffer (/dev/fb1), and finally move live video from a web cam to the framebuffer. A future lab may show how to process the video on the DSP before displaying it.

The exercise has four parts:

  • Part a: You will build an on-screen display for your device using the FBdev driver – INSPECTION LAB only.
  • Part b: Examines V4L2 video capture via a simple video recorder application – INSPECTION LAB only.
  • Part c: Examines the FBdev display driver via a video display application, which plays back the file captured in Part b – INSPECTION LAB only.
  • Part d: You will combine the recorder and player applications into a video loop-thru application, using memcpy to transfer data between the capture and display drivers.

Part a - OSD Setup

The goal of this part is to build an on-screen display for the Beagle using the FBdev driver. From a coding perspective, it's an inspection lab only.

  1. Create your own custom picture for the OSD window (using Gimp), saving the picture in 32-bit RGBA format.
  2. Inspect video_thread.c and helper functions (inside video_osd.c).
  3. Build, run. Result: see your custom banner displayed on screen (no video yet…).
  • Get the files from the git repository

Make sure you have the most up to date versions.

beagle$ cd ~/exercises
beagle$ git pull
  • Change to the directory: videoThru/lab07a_osd_setup
  • Open the Gimp (open-source) paint program by typing “gimp Rose640x480a.bmp” in the terminal.
  • Edit the file to create a custom banner picture.

This file is 640 (width) by 480 (height). All the code that follows assumes the image is this size. (You may change the size, but you'll also have to change the code.)

  • Paint something for your OSD banner. You can create a simple graphic quickly using just three of the many tools.
    • Before clicking any of the tools, you can choose a color using the color box.
    • Start with the gradient tool to create a background. Select the tool, then click and drag the mouse over the 640x480 image area.
    • Add text or paint something over the gradient with the text or paintbrush tool.

[Image: GimpTools.png]

  • Save your file and exit Gimp.


When you are finished, save with File:Save, then exit Gimp. The file should be saved as RGBA, that is, red, green, blue, and alpha, where alpha is the transparency. It DOES matter what you name the file, because during the build process this file is copied to the target by name. The name also makes it easy to remember that it's a 640x480 bitmap image in 32-bit RGBA.

  • Make sure your file is saved to your lab folder.
  • List the contents of the lab folder

Examine two of the video files

video_osd.c

video_osd.c contains a number of functions for manipulating the on-screen display. These functions are not part of an official package; they were developed for these lab exercises to demonstrate the capabilities of the on-screen display hardware. A sketch of the core idea follows the list.

  1. video_osd_place(): places a picture on the OSD display. Assumes data is provided in 32-bit ARGB (8-bit alpha/transparency, 8-bit red, 8-bit green, 8-bit blue per pixel).
  2. video_osd_scroll(): a more complex version of video_osd_place() that will offset the OSD display by x and/or y scroll values. This can be used to scroll a banner or picture horizontally or vertically.
  3. video_osd_circframe(): draws a circular alpha-blended frame around the video output.
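
The exact implementations live in video_osd.c; the sketch below only shows the core idea behind a place-style helper. The signature and names are hypothetical, and bounds checking is omitted for brevity. It assumes the OSD framebuffer has already been mmap'ed as a flat array of 32-bit ARGB pixels.

#include <stdint.h>

/* Hypothetical place-style helper: copy a picWidth x picHeight
 * 32-bit ARGB picture into an OSD buffer that is osdWidth pixels
 * wide, at the given x/y offset. */
static void osd_place_sketch(uint32_t *osd, int osdWidth,
                             const uint32_t *pic, int picWidth,
                             int picHeight, int xOff, int yOff)
{
    for (int row = 0; row < picHeight; row++)
        for (int col = 0; col < picWidth; col++)
            osd[(yOff + row) * osdWidth + (xOff + col)] =
                pic[row * picWidth + col];
}

Note that the helper just copies pixels; because each pixel carries its own alpha byte, the blending with the video plane underneath is done by the display hardware.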

osd_thread.c

The thread function in osd_thread.c is video_thread_fxn(), which uses the helper functions from video_osd.c to build the alpha-blended display. It:

  1. Calls video_osd_setup() to open the OSD window. This window is memory mapped (mmap'ed) into the application space, and a handle to the Display object is returned and stored in osdFd.
  2. Calls fopen() to read the custom banner picture (as created in Gimp and stored in a 32-bit ARGB bitmap file).
  3. Initializes the OSD buffer by calling video_osd_place(), which places the picture on the OSD window with a y offset of 200.
  4. Calls video_osd_circframe() to initialize the OSD buffer with a circular, semi-transparent green frame (an alpha of 0x80 is roughly 50% transparency; 0x0000FF is blue, 0x00FF00 is green, 0xFF0000 is red).

The application assumes the picture is supplied in a 640 x 480 32-bit ARGB (8-8-8-8) format, which should be the case if you followed the previous gimp instructions.
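
As a sanity check on buffer sizes: a single 640 x 480 frame at 4 bytes per pixel occupies 640 × 480 × 4 = 1,228,800 bytes, about 1.2 MB.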

  • Build and execute the application.
beagle$ make debug
beagle$ ./videoThru_DEBUG.Beagle

What do you see?

Part a Questions

  1. How would you modify the lab07a_osd_setup application to make the banner you created semi-transparent instead of solid?
  2. How would you modify the lab07a_osd_setup application to place your banner at the top of the screen?
  3. You can build a "Release" version of the code with make release. Time both the debug and release versions of the code.
  4. The current code places a green rectangle with a red circle (ellipse?) on it in the upper left corner. Modify the code to place a blue rectangle with a yellow circle in the bottom right.
  5. (Advanced) Why is osdFd preceded with an ampersand (&) in the call to video_osd_setup( &osdFd, FBVID_OSD, 0x40, &osdDisplay )?

Part b - Video Capture

The goal of Part b is to examine V4L2 video capture via a simple video recorder app. This is an inspection lab only.

  1. Examine helper functions (setup, cleanup, wait_for_frame) in video_input.c.
  2. Examine video_thread_fxn() in video_thread.c.
  3. Examine main.c (how the signal handler is created/used, then calls video_thread_fxn).
  4. Build, run. Result: create a file (video.raw) that contains about 2s of captured video.
  • Change to the directory: lab07b_video_capture

Examine the video files

video_thread.c

This file contains a single function, video_thread_fxn(). This function encapsulates the functionality necessary to run the video recorder and is analogous to the audio_thread_fxn() that was used in the previous lab.

video_thread_fxn() utilizes the following:

video_input_setup()
This function detects the video standard that the web cam is using and calculates the corresponding buffer size of a single video frame. It opens and configures the Linux V4L2 video capture driver. The driver uses driver-allocated (as opposed to user-allocated) buffers that are mmap'ed into user space to store video frames. Advanced users can take a look in video_input.c to see the details.
fopen()
Standard Linux I/O call (i.e., from #include <stdio.h>) to open the file where the captured video data will be written.
for() loop (a sketch of this loop follows the list)
  • Loops through 100 cycles so as not to overflow the /tmp directory's RAM-backed storage.
  • ioctl( captureFd, VIDIOC_DQBUF, &v4l2buf ): dequeues the next video frame from the V4L2 driver (blocks/pauses if a buffer is not available yet).
  • fwrite(): copies the video frame into the file.
  • ioctl( captureFd, VIDIOC_QBUF, &v4l2buf ): once the application has finished writing the video buffer to the file, the buffer handle must be passed back to the driver so that it can be refilled with new video data.
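
Here is a sketch of what that loop boils down to. It is not the lab's code: the buffer table, frameSize, and file handle are assumed to come from video_input_setup() and fopen(), and error handling is reduced to early returns.

#include <linux/videodev2.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>

/* Sketch of the capture loop. Assumes setup already requested
 * mmap'ed driver buffers with VIDIOC_REQBUFS and queued them
 * with VIDIOC_QBUF, and that 'buffers' holds their mmap'ed
 * user-space addresses. */
int capture_sketch(int captureFd, FILE *outFile,
                   void **buffers, size_t frameSize)
{
    struct v4l2_buffer v4l2buf;

    for (int frame = 0; frame < 100; frame++) {  /* 100 frames: don't fill /tmp */
        memset(&v4l2buf, 0, sizeof(v4l2buf));
        v4l2buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        v4l2buf.memory = V4L2_MEMORY_MMAP;

        /* GET: dequeue the next filled buffer (blocks until ready) */
        if (ioctl(captureFd, VIDIOC_DQBUF, &v4l2buf) < 0)
            return -1;

        /* PUT: write the frame to the file */
        fwrite(buffers[v4l2buf.index], frameSize, 1, outFile);

        /* hand the buffer back to the driver to be refilled */
        if (ioctl(captureFd, VIDIOC_QBUF, &v4l2buf) < 0)
            return -1;
    }
    return 0;
}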

main.c

This is the entry point for the application. main() does the following:

  • Creates a signal handler to trap the Ctrl-C signal (also called SIGINT, the interrupt signal); a simplified sketch follows this list. When this signal is sent to the application, the videoEnv.quit global variable is set to true, which signals the video thread to exit its main loop and begin cleanup.
  • After configuring this signal handler, main() calls the video_thread_fxn() function to enter into the video thread. Upon completion of this function, main() checks the return value of the function (success or failure) and reports.
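
The pattern looks roughly like this (the lab's code sets videoEnv.quit; a plain flag stands in here):

#include <signal.h>

static volatile sig_atomic_t quit = 0;   /* stands in for videoEnv.quit */

/* Runs when Ctrl-C (SIGINT) is delivered to the process. */
static void signal_handler(int sig)
{
    (void)sig;    /* only SIGINT is registered below */
    quit = 1;     /* the video thread's loop tests this and exits */
}

/* In main(), before entering video_thread_fxn():
 *     signal(SIGINT, signal_handler);
 */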

Build and run

  • Run the application

Enter:

beagle$ make
beagle$ ./videoThru_DEBUG.Beagle

You will get a message from the application indicating that it has captured video frames. Check the following to ensure that the video has recorded properly:

beagle$ ls -lsa /tmp/video.raw

The file should be about 60 MB in size. The reason that the application only records 100 video frames is to keep from overflowing the /tmp directory.
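
Where the ~60 MB figure comes from: assuming the web cam delivers 640x480 frames in a 16-bit-per-pixel YUV 4:2:2 format (2 bytes per pixel, a common V4L2 capture format), 100 frames × 640 × 480 × 2 bytes ≈ 61.4 MB.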

Part c - Video Playback

The goal of this part is to examine the FBdev display driver using a video display app. The app plays back the file recorded in Part b (and adds the OSD from Part a). This is an inspection lab only.

  1. Examine video_output.c and its helper functions.
  2. Ensure /tmp/video.raw still exists.
  3. Build, run. Result: video.raw file is displayed on the screen (along with your OSD).
  • Change to lab07c_video_playback.

Examine video_thread.c

As opposed to the recorder, this application uses ioctl calls to manage the framebuffer at /dev/fb1 to display video frames it reads from the /tmp/video.raw file:

fopen() and fread()
Opens the input file containing captured video frames.
video_osd_setup()
Creates a table of appropriately sized buffers to hold the video frame data that is read from the file. The driver uses driver-allocated buffers that are mapped into user space to store video frames.
while() loop
  • Loops until Ctrl-C is pressed or the input file is depleted.
  • fread(): the next video frame is read from the file and copied into the video buffer.
  • flip_display_buffers(): uses ioctl calls to tell the display subsystem to display the next buffer (a sketch of one common flipping technique appears below).
  • Build and run the application.
  • Check to make sure /tmp/video.raw exists and has a file size larger than zero.

Use “ls -lsa /tmp/video.raw” to verify that video.raw exists and has a greater than zero file size. The application is hard coded (using a #define statement in video_thread.c) to read data from the file /tmp/video.raw. If you have powered off or reset since running Part b, the video.raw file will have been cleared. If so, go back to lab07b_video_capture to create the video.raw file again.
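
flip_display_buffers() is the interesting helper. One common way to flip buffers on an FBdev device is to pan the visible window with the FBIOPAN_DISPLAY ioctl, as sketched below; the lab's helper may be implemented differently, and the sketch assumes the driver was configured with a virtual y resolution of at least twice the visible one.

#include <linux/fb.h>
#include <sys/ioctl.h>

/* Sketch of FBdev double-buffer flipping via display panning.
 * Assumes var was filled by FBIOGET_VSCREENINFO and that
 * var->yres_virtual >= 2 * var->yres. */
static int flip_sketch(int fbFd, struct fb_var_screeninfo *var,
                       int *workingBuf)
{
    /* point the display at the buffer that was just filled */
    var->yoffset = *workingBuf * var->yres;
    if (ioctl(fbFd, FBIOPAN_DISPLAY, var) < 0)
        return -1;

    /* the next frame gets written into the other buffer */
    *workingBuf = !*workingBuf;
    return 0;
}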

Examine main.c

This main.c is the same as before, except we've added a couple of system() calls to display the video buffer and then hide it. The string in the system() call is run in a shell. In this case we are using vid1Show to display the video frame buffer.

  1. Examine setDSSpaths. What does it do?
  2. Examine vid1Show. What does it do? How do you make the video frame buffer display in a different location on your screen?
  3. How do you change the size of the video display?
  4. The line echo 0 > $mgr0/alpha_blending_enabled sets transparency. Change echo 0 to echo 1. What happens? Line up the video display with your Rose .bmp file. What happens with the transparent parts of the .bmp file?

Part d - Video Loopthru

The goal of Part d is to combine the recorder (Part b) and playback (Part c) into a video loopthru application.

  • Hey – YOU get to do this yourself (no more inspection stuff…).
  1. Answer a few questions about the big picture.
  2. Copy files from Part c (playback) to Part d (loopthru).
  3. Add video input files from Part b (record) to Part d (loopthru).
  4. Make code modifications to stitch the record to the playback.
  5. Build, run. Result: video is captured (V4L2) and then displayed (FBdev) with your OSD.

In this portion of the lab, you will combine Part b (video capture) and Part c (video playback) into a single video loop-thru application.

In Part b, we recorded video from the web cam via the V4L2 input and placed it into a file (video.raw) – this used an fwrite() call to write the video buffer to a file. In Part c, we did an fread() of the video.raw file and sent that video to the display driver.

We now have the input (capture) application (Part b) and the output (display) application (Part c), which you will combine into a single application (Part d). We'll need to get rid of the file reads/writes and replace them with a memcpy operation that copies data from the capture driver's buffers to the display driver's buffers; a sketch of the resulting loop appears after the worksheet below.

Before Starting

Before we start copying, cutting, and pasting files and code, let’s think about what must be done to get the loopthru lab to work.

  • In Part b, we used fwrite() to PUT (write) the video data to the video.raw file. What two functions were used to GET (read) the video data from the V4L2 driver and return the video buffer back to the driver once the application has recorded the data?

GET video data: 1.
                2.
PUT video data: 1. fwrite() the video frame

  • Similarly, in Part c, we used the function listed below to PUT (write) the data to the FBdev driver. What function is used to GET (read) the video data?

GET video data: 1.
PUT video data: 1. flip_display_buffers()

In this lab exercise, we will start with the Part c files, then edit them to create the loop-thru code. Based on this, what functions, generally speaking, will be required in the Part d while() loop?

GET video data:  1.
                 2.
Copy video data: 1. memcpy to copy from input to output
PUT video data:  1.
                 2.

To summarize, the following lab procedure will take the _capture and _playback files and combine them into a loopthru example.
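
Once you have filled in the worksheet above, your inner loop should resemble the following sketch. The names here are hypothetical stand-ins for the variables in the files you merge, and error handling is minimal:

#include <linux/videodev2.h>
#include <string.h>
#include <sys/ioctl.h>

/* Sketch of the combined loop-thru inner loop (hypothetical names;
 * the real variables come from the Part b and Part c files). */
int loopthru_sketch(int captureFd, void **captureBufs,
                    void *displayBuf, size_t frameSize,
                    volatile int *quit)
{
    struct v4l2_buffer v4l2buf;

    while (!*quit) {
        memset(&v4l2buf, 0, sizeof(v4l2buf));
        v4l2buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        v4l2buf.memory = V4L2_MEMORY_MMAP;

        /* GET: dequeue a filled capture buffer from the V4L2 driver */
        if (ioctl(captureFd, VIDIOC_DQBUF, &v4l2buf) < 0)
            return -1;

        /* COPY: replaces the fwrite()/fread() pair from Parts b and c */
        memcpy(displayBuf, captureBufs[v4l2buf.index], frameSize);

        /* PUT: call flip_display_buffers() here, as in Part c */

        /* recycle the capture buffer so the driver can refill it */
        if (ioctl(captureFd, VIDIOC_QBUF, &v4l2buf) < 0)
            return -1;
    }
    return 0;
}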

Now, go do it.

Demo your working program

Once you have the video loop-thru working, demo your program to get credit.



