Revision as of 09:43, 31 March 2020 by Anirudh666 (talk | contribs) (Benefit)

Proposal for Emotions Recognition in School Kids using Deep Learning and NLP with BeagleBone Black

Student: Anirudh Sivakumar


This project is currently just a proposal.


I have completed all the requirements listed on the ideas page. The code for the task can be found in the GitHub repository, submitted through pull request #143.

About you

IRC: Anirudh666
Github: Anirudh666
School: Manipal Institute of Technology, Manipal
Country: India
Primary languages: English, Tamil
Typical work hours: 7 AM to 9 PM IST

About your project

Project name: Emotions Recognition in School Kids using Deep Learning and NLP with BeagleBone Black


School kids often go unrecognized in many Asian educational systems. They carry a range of emotions, from witnessing their parents' fights to being bullied. Kids often form lasting opinions about their lives and future lifestyles based on these experiences, which can affect their careers. The education systems of many countries advise identifying such students and providing personal peer-to-peer counseling. Beyond human judgment alone, we aim to create a trained algorithm that can identify such induced emotions and flag students who are neither approachable nor noticed by teachers.

Natural Language Processing (NLP) with deep learning has proven to be a strong fit for decision support in speech recognition. A deep learning model is built and trained over time until it fits well enough to be deployed in small-scale applications such as a mobile app or a desktop executable.

The project has three stages. The first stage uses a BeagleBone Black and a microphone to record interview audio of students from various schools in India and store the data in the cloud. The second stage feeds the stored data to a deep learning algorithm and trains it until it reaches a best-fit accuracy. The third stage covers the design and development of a mobile application, to use the system remotely and to test the efficiency of the trained algorithm.

The idea behind the project is to identify the emotions of school kids and their social behavior, helping teachers run more effective peer-to-peer counseling. The longer-term outcome would be research on speech recognition using NLP and deep learning, providing the scientific community with well-fitted speech samples for future R&D in economically developing countries.
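As a rough sketch of how stage 1 audio could be prepared for the stage 2 algorithm, recorded speech might be split into short overlapping frames and reduced to simple features. The function names, frame parameters, and the simulated recording below are illustrative assumptions, not part of any existing codebase; a real pipeline would use richer features such as MFCCs.

```python
import numpy as np

def frame_signal(signal, frame_len=400, hop=160):
    """Split a 1-D audio signal into overlapping frames
    (assuming 16 kHz audio: 25 ms frames with a 10 ms hop)."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.stack([signal[i * hop : i * hop + frame_len]
                     for i in range(n_frames)])

def log_energy_features(frames):
    """Per-frame log energy, a minimal stand-in for richer
    spectral features like MFCCs."""
    energy = np.sum(frames.astype(np.float64) ** 2, axis=1)
    return np.log(energy + 1e-10)

# Simulated one-second clip at 16 kHz in place of the microphone capture.
rng = np.random.default_rng(0)
audio = rng.standard_normal(16000)
frames = frame_signal(audio)
feats = log_energy_features(frames)
print(frames.shape, feats.shape)
```

Each interview recording would be reduced to such a feature sequence before being uploaded or fed to the model, which keeps the cloud storage and training data compact.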


Provide a development timeline with a milestone each of the 11 weeks and any pre-work. (A realistic timeline is critical to our selection process.)

Mar 31 Proposal complete, Submitted to
May 4 Proposal accepted or rejected
May 18 Pre-work complete, Coding officially begins!
May 25 Finalise Hardware and design and develop the PCB for the system, Introductory YouTube video
June 1 Setup a lab trial run with basic code
June 8 Interface system with the cloud storage
June 15 18:00 UTC Start building deep learning algorithm with MATLAB and Python and check the working of both these algorithms, Mentors and students can begin submitting Phase 1 evaluations
June 19 18:00 UTC Phase 1 Evaluation deadline
June 22 Visit nearby schools and interview the students to obtain data for training and developing the deep learning algorithm
June 29 Finish training the algorithm for the first time
July 6 Revisit the schools to collect more data and train the algorithm for the second time
July 13 18:00 UTC See the improvement in the trained algorithm with the students' data, Mentors and students can begin submitting Phase 2 evaluations
July 17 18:00 UTC Phase 2 Evaluation deadline
July 20 Repeat the algorithm training process if required, depending on the performance of the system
July 27 Start looking into building the mobile application and integration of the 2 systems
August 3 Finalise the working of the mobile application and test it on a few more students through the mobile application, to check the integrity of the system, Completion YouTube video
August 10 - 17 18:00 UTC Final week: Students submit their final work product and their final mentor evaluation
August 17 - 24 18:00 UTC Mentors submit final student evaluations

Experience and approach

I am a 3rd-year Mechatronics student at Manipal Institute of Technology, Manipal. I have worked on multiple projects in the past and I am also currently working on a few projects. My work experience is as follows:

Formula Manipal (Sep 2017 – Aug 2019): I was part of a student project team called Formula Manipal for two years, where I worked on designing the electrical and electronics systems for the electric vehicle. I designed an in-house battery management system for the lithium-ion battery pack using LTC6804 ICs, with a dedicated PCB for each slave. The system consisted of 12 slaves and 1 master communicating over an isoSPI bus. It monitored the entire battery pack of 84 cells at a total voltage of 350 volts, tracking current, voltage, and temperature to ensure safe operation. I also worked on the motor control system, using a Rinehart PM100 DZ controller to drive the Emrax 208 PMS motor in the car. I designed the data acquisition system, which ran on the CAN protocol with a BeagleBone Black master and 4 STM32 slaves, collecting data from sensors placed throughout the car as well as from the motor controller and the battery management system. Finally, I designed the entire safety system of the electric vehicle, comprising four separate subsystems, each with its own function and a dedicated PCB, to keep the car safe at all times.

CAIR (DRDO) (May 2019 – July 2019): I was an intern at CAIR (DRDO) in the summer of 2019, in the robotics department, where I helped design the navigation system for an Indo-US hexacopter project. I helped design the sensor network for the copter, which consisted of 6 MaxBotix MB122 ultrasonic sensors that I interfaced to the Jetson TX1 master computer through the Robot Operating System (ROS) to develop the cost map.

loopMIT (Sep 2019 – present): I am currently part of a student project team called loopMIT, which I co-founded and where I head the electronics and propulsion subsystem. We are designing a Hyperloop pod propelled by a linear induction motor (LIM) that levitates using passive levitation. I am designing the pod's electronics, including the data acquisition system, battery management system, motor controller, and flight and brake controllers. I am currently working on the test rigs for the levitation and lateral control modules, on the design of the linear induction motor, and on a system that could use the LIM for braking. We are set to participate in the SpaceX Hyperloop Pod Competition 2021.

IICDC 2019 (Oct 2019 – present): I am currently in the semi-finals of IICDC 2019 with E_agri, a smart IoT-based agricultural system. The system consists of multiple sensor nodes placed throughout a farm and a few sink nodes that collect their readings. I am developing the sink node, a microcontroller that wirelessly gathers data from all the sensor nodes and sends it to the cloud, where it is processed and presented as useful information the farmer can access easily.

I am also set to present a research paper at the ACTSE 2020 international conference, and I am currently working on another research paper on deep learning. Since I have worked with the BeagleBone Black before, I will start by designing the hardware and making a dedicated PCB for it. I will then start coding the deep learning algorithm in Python and MATLAB.
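As a toy stand-in for the planned Python side of the classifier, the sketch below trains a softmax regression on random "feature" vectors with three hypothetical emotion labels. The feature count, label names, and training setup are all illustrative assumptions; the actual model would be a deeper network trained on real interview data.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((120, 13))   # e.g. 13 MFCC-style features per clip (assumed)
y = rng.integers(0, 3, size=120)     # labels 0/1/2: hypothetical emotion classes

W = np.zeros((13, 3))
for _ in range(200):                 # plain gradient descent on cross-entropy loss
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    onehot = np.eye(3)[y]
    W -= 0.1 * X.T @ (p - onehot) / len(X)

acc = (np.argmax(X @ W, axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The same loop structure carries over when the linear map is replaced by a neural network in a framework such as TensorFlow or PyTorch, so this serves as a baseline to verify the data pipeline before heavier training begins.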


If I get stuck on a problem, I will first look into all the systems linked to that problem, read about them in detail, and search for other solutions online, so I can return to the system with a better approach. Since there is a plethora of online resources on BeagleBone controllers, neural networks, and deep learning, I expect to be able to find a solution. If, after multiple attempts, I am still unable to fix the system, I will approach one of the faculty members at my university who has worked in embedded systems and deep learning.


If completed, this project will help in understanding the emotions not only of students, but of individuals in any field who face mental health problems and are unable to get help or be identified, and it can help them seek attention. I truly hope I can help at least one person through this project; only then will it be complete. Since I will be building a mobile application around it, it will be accessible to everyone. The BeagleBoard / #beagle community could also use it for similar purposes, or as a base for a more in-depth emotion recognition algorithm in other AI projects.




Is there anything else we should have asked you?