BeagleBoard/bb blue api v2

From eLinux.org
Latest revision as of 08:02, 29 March 2017


bb_blue_api_v2

{{#ev:youtube|Jl3sUq2WwcY||right|BeagleLogic}}

Stereo-vision support for the BeagleBone: develop the BeagleCV library, wrap the onboard SGX530 GPU through OpenGL ES2, and implement a stereo matching algorithm on it.

Student: Kiran Kumar Lekkala
Mentors: Jason Kridner
Code: https://github.com/kiran4399/beaglecv
Wiki: http://elinux.org/BeagleBoard/bb_blue_api_v2
GSoC: GSoC entry

Status

This project is currently just a proposal.

Proposal


About you

IRC: kiran4399
Github: https://github.com/kiran4399
School: Indian Institute of Information Technology, SriCity
Country: India
Primary language (We have mentors who speak multiple languages): English
Typical work hours (We have mentors in various time zones): 8AM-5PM IST
Previous GSoC participation: https://summerofcode.withgoogle.com/archive/2016/projects/6295262146330624/

About your project

Project name: Stereo Vision support for BeagleBone using BeagleCV
The aim of the project is to add support for a stereo camera on the BeagleBone using BeagleCV. This consists of developing the BeagleCV library and creating OpenGL ES2 wrappers for the SGX530 3D accelerator present onboard, which enables faster computation, and then implementing the stereo vision algorithm on the GPU. Finally, if time permits, this work will be added to the BeagleBone Blue APIs.

Description

The kernel I will be using is 4.4.56-bone17. This version ships with the prebuilt kernel modules omaplfb, tilcdc and pvrsrvkm, which are essential for running the SGX530. The following are the project goals and challenges I plan to deliver by the end of the tenure:
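As a quick sanity check, the presence of those modules can be verified by scanning a /proc/modules-style listing. The helper below is an illustrative sketch, not part of any existing tool; on the board the input text would come from /proc/modules.

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Return the required SGX530 modules (omaplfb, tilcdc, pvrsrvkm) that do
// NOT appear in a /proc/modules-style listing.
std::vector<std::string> missing_modules(const std::string& proc_modules) {
    const std::vector<std::string> required = {"omaplfb", "tilcdc", "pvrsrvkm"};
    std::vector<std::string> missing;
    for (const auto& mod : required) {
        bool found = false;
        std::istringstream lines(proc_modules);
        std::string line;
        while (std::getline(lines, line)) {
            // /proc/modules puts the module name in the first field
            if (line.compare(0, mod.size(), mod) == 0) { found = true; break; }
        }
        if (!found) missing.push_back(mod);
    }
    return missing;
}
```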
Utilizing the onboard GPU: The BeagleBone Blue/Black has an inbuilt PowerVR SGX530 3D accelerator capable of general graphical processing. By the end of this project, I will create some sample vision programs that use OpenGL ES2 to utilize the GPU, and write documentation on how to use the SGX530 on the BeagleBone Blue/Black.

Implementing vision algorithms for the camera: One of the most important deliverables of this project is the BeagleCV library. BeagleCV is forked from the popular real-time computer-vision library libCVD and stripped down to suit the BeagleBone. It offers many image-processing APIs that can be used to implement vision algorithms; I will use these together with the OpenGL ES APIs to implement a stereo matching algorithm.
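As an example of the kind of per-pixel image operation such a library exposes (the function below is an illustrative sketch, not an actual BeagleCV API), here is RGB-to-greyscale conversion with fixed-point BT.601 luma weights:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Convert an interleaved R,G,B byte buffer to a single-channel greyscale
// buffer using the BT.601 luma approximation 0.299 R + 0.587 G + 0.114 B,
// computed in integer fixed point.
std::vector<std::uint8_t> rgb_to_grey(const std::vector<std::uint8_t>& rgb) {
    assert(rgb.size() % 3 == 0);  // expects whole R,G,B triples
    std::vector<std::uint8_t> grey(rgb.size() / 3);
    for (std::size_t i = 0; i < grey.size(); ++i) {
        unsigned r = rgb[3 * i], g = rgb[3 * i + 1], b = rgb[3 * i + 2];
        grey[i] = static_cast<std::uint8_t>((299 * r + 587 * g + 114 * b) / 1000);
    }
    return grey;
}
```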

Stereo camera support for the BeagleBone: Configuring and utilizing an OV9712-based stereo camera to extract depth information on the BeagleBone using BeagleCV. This implementation would have a lot of impact on the embedded robotics community.

Documentation and examples: I will provide extensive and accurate documentation for whatever I build in this project. Functional documentation for BeagleCV will be done in Doxygen; code documentation will be written as comments in the source files.
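For reference, the depth recovered from a calibrated stereo pair follows the standard pinhole relation Z = f·B/d. The sketch below illustrates it; any calibration numbers used with it are assumptions, not OV9712 data.

```cpp
#include <cassert>
#include <limits>

// Standard stereo relation: depth Z = f * B / d, with focal length f in
// pixels, baseline B in metres and disparity d in pixels.
double depth_from_disparity(double focal_px, double baseline_m, double disparity_px) {
    if (disparity_px <= 0.0)  // zero disparity means the point is at infinity
        return std::numeric_limits<double>::infinity();
    return focal_px * baseline_m / disparity_px;
}
```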


(Future work) Adding BeagleCV support to the BeagleBone Blue APIs: Once BeagleCV has been implemented and v1 released, I will add support for it to the BeagleBone Blue API repository. This would enable users to implement sensor-fusion algorithms that help in robotic localization, tracking, detection and navigation.

Timeline

Google Summer of Code stretches over a period of 12 weeks, with the Phase-1, Phase-2 and final evaluations in the 4th, 8th and 12th weeks respectively. The following are the timelines and milestones I intend to follow strictly throughout the project tenure:


May 30 - June 13  Week-1,2
Aim: Cleaning, minimizing and testing bare BeagleCV
Description: I created BeagleCV from libCVD, a C++ library designed to be easy to use and portable for real-time applications. The first step is to make the library command-line only: unnecessary components such as X11, the GUI, OpenGL text rendering and image display will be removed to streamline the library. The SGX530 will be tested and its installation documented.


June 14 - June 27  Week-3,4
Aim: Implementing functional OpenGL ES2 APIs for BeagleCV
Description: BeagleCV currently has little OpenGL support. My job in these weeks is to remove the display-related APIs and extend the computation APIs. New GL helper functions, beyond the original ones that came with libCVD, will also be implemented. Images will be preprocessed with the OpenGL APIs and converted to greyscale, applying some of the performance optimization methods suggested by Singhal et al. for image processing on the SGX530. These shader techniques help keep memory use and processing cost low:
Floating-point precision control
Loop unrolling (supported by OpenGL ES2)
Branching
Load sharing between vertex and fragment shaders
Texture compression: images will be converted to the PowerVR Texture Compression (PVRTC) format with a compression ratio of 8:1.
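To illustrate one of the optimizations listed above, loop unrolling, here is a CPU-side sketch (illustrative only, not BeagleCV code); on the SGX530 the same idea is applied inside shaders:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sum all pixel values, processing four per iteration with independent
// accumulators (fewer loop-condition checks, more instruction-level
// parallelism), then a remainder loop for the tail.
std::uint64_t sum_pixels_unrolled(const std::vector<std::uint8_t>& px) {
    std::uint64_t s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    std::size_t i = 0, n = px.size();
    for (; i + 4 <= n; i += 4) {  // unrolled by a factor of 4
        s0 += px[i];     s1 += px[i + 1];
        s2 += px[i + 2]; s3 += px[i + 3];
    }
    std::uint64_t s = s0 + s1 + s2 + s3;
    for (; i < n; ++i) s += px[i];  // leftover pixels
    return s;
}
```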


June 28 - July 4  Week-5
Aim: Writing examples using the functional APIs
Description: Example programs that exercise the APIs present in the repository will be written and tested, and any bugs found in this source code will be fixed.

July 5 - July 11  Week-6
Aim: Performance evaluation and documentation of BeagleCV
Description: A reserve week for testing examples from the BeagleCV repository and measuring their performance with respect to CPU-only, GPU-only and CPU+GPU support. All functionality will be properly documented.


July 12 - July 25  Week-7,8
Aim: Block Matching (BM) algorithm using BeagleCV
Description: The BM algorithm is the most widely used stereo matching algorithm in the embedded community because of its low processing cost. Other methods like BE, SGM etc. are very costly and require high-end GPUs for real-time processing. A bare version of the algorithm will be implemented, without any I/O integration.
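A minimal sketch of SAD-based block matching on a single scanline (an illustrative C++ sketch of the technique, not the eventual BeagleCV implementation):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// For the pixel at column x in the left scanline, try every candidate
// disparity 0..max_disp and return the shift whose window (of width
// 2*half_win+1) has the lowest sum of absolute differences (SAD)
// against the right scanline.
int best_disparity(const std::vector<int>& left, const std::vector<int>& right,
                   int x, int half_win, int max_disp) {
    int best_d = 0;
    long best_sad = -1;
    for (int d = 0; d <= max_disp; ++d) {
        long sad = 0;
        for (int k = -half_win; k <= half_win; ++k) {
            int xl = x + k;      // sample in the left image
            int xr = x + k - d;  // candidate match shifts left in the right image
            if (xl < 0 || xr < 0 ||
                xl >= static_cast<int>(left.size()) ||
                xr >= static_cast<int>(right.size())) {
                sad += 255;  // penalize out-of-bounds samples
                continue;
            }
            sad += std::abs(left[xl] - right[xr]);
        }
        if (best_sad < 0 || sad < best_sad) { best_sad = sad; best_d = d; }
    }
    return best_d;
}
```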


July 26 - August 8  Week-9,10
Aim: Stereo camera configuration and integration of I/O into the stereo algorithm
Description: This is the core of the stereo pipeline. I will be using an OV9712 stereo camera for this project, which is v4l2 compatible. The two cameras appear as two v4l2 devices that can be used for streaming images.


August 9 - August 15  Week-11
Aim: Testing the stereo matching algorithm and checking performance
Description: Generating the final disparity map from two stereo images, checking the accuracy and documenting the approach.

August 16 - August 22  Week-12
Aim: FINAL EVALUATION !!
Description: Checking for code smells and bugs, refining the earlier documentation so that it is easier to understand, checking the final implementation and doing the run-through again.

August 23 onwards: Future work
Aim: Integrating BeagleCV with the BeagleBone Blue APIs
Description: Once BeagleCV is stable, it will be added to the BeagleBone Blue APIs to provide vision support alongside the other sensors. I will also maintain the BeagleCV library, adding more algorithms and fixing any bugs pertaining to the BeagleBone.

Experience and approach

I am a fourth-year undergraduate student studying in India. Besides a keen interest in networking, robotics and systems-related courses, I also like hacking on embedded electronics. I would like to work on an open-source project this summer because it is interesting, and contributing to a project is fun and exciting. I have not worked much on open source before, but I have some idea of how things work in the open-source community, which I find fascinating.

Direct SLAM using Unsupervised CNN: Modified the LSD-SLAM system by incorporating an unsupervised CNN to estimate the depth of every frame and then optimizing the pose graph based on the inverse-depth map. I also built a mobile application which uses the camera feed and the real-time metric data from the sensors to generate a dense 3D reconstruction of the surroundings.

Object segmentation and tracking in RGB-D images: Developed a robust segmentation method using deep learning which accurately extracts an object from an RGB-D image and subsequently tracks the object in the RGB-D stream. Currently this is an ongoing project.

Accurate and Augmented Localization and Mapping for Indoor Quadcopters: In this project, a state-estimation system for Quadcopters operating in indoor environment is developed that enables the quadcopter to localize itself on a globally scaled map reconstructed by the system. To estimate the pose and the global map, we use ORB-SLAM, fused with onboard metric sensors along with a 2D LIDAR mounted on the Quadcopter which helps in robust tracking and scale estimation.

Enhancing ORB-SLAM using IMU and Sonar: Increased the accuracy and robustness of ORB-SLAM by integrating an Extended Kalman Filter (EKF) that fuses the IMU and sonar measurements. The scale of the map is estimated by a closed-form Maximum Likelihood approach.

Semi-Autonomous Quadcopter for Person Following: Developed an IBVS based robotic system, implemented on Parrot AR Drone, which is capable of following a person or any moving object and simultaneously measuring the localized coordinates of the quadcopter, on a scaled map.

API Support for BeagleBone Blue: Created easy-to-use APIs for BeagleBone Blue. With these APIs, applications can be directly ported onto the board. This project was a collaboration of Beagleboard.org with the University of California, San Diego as part of Google Summer of Code 2016.

Constructing Artificial Potential Fields from PD Flow: Worked on constructing Artificial Potential Fields using the Primal-Dual algorithm PD flow, which efficiently calculates the dense Scene flow of an RGB-D camera.

Intelligent Parking System for Autonomous Robot: This module is part of an ADS (Autonomous Driving System) used for accurate autonomous parking. Using a BeagleBone Black as the onboard controller, the robot finds the parking set-point by matching features using SURF descriptors on the template image and drives the actuators (motors) connected to the PRU (Programmable Real-time Unit).

I plan all my work carefully and sketch out a routine so that the planned work gets completed within the given time. I always set priorities and keep priority management above time management. My policy is: “Hard work beats talent when talent doesn't work hard!” I strongly feel that striving to know something is the best way to learn it.

Contingency

If I get stuck on my project and my mentor isn't around, I will search for the error online and research it myself; I feel that there is very little that cannot be found on the internet. I will also ask for help from the other developers present on IRC.

Benefit

kiran4399: As a robotics researcher, I personally feel that a lot can be achieved just by using the data from low-level sensors. By developing BeagleCV, its functionality and applications, students will benefit greatly and be able to apply many high-level concepts like visual tracking, localization, detection and pose estimation. By adding BeagleCV to the BeagleBone Blue APIs, it would become very easy to implement sensor-fusion algorithms.

Suggestions

I can assure that I will work around 50-55 hours a week without any other commitments. I also hope to gain a lot of learning experience throughout the program and come closer to the open-source world.