BeagleBoard/GSoC/2020Proposal/PrashantDandriyal 2


This page contains the second proposal, revising the initial project proposal for YOLO Models on the X15/AI. The first proposal can be found here:

BeagleBoard/GSoC/Proposal : YOLO models on the X15/AI

{{#ev:youtube|Jl3sUq2WwcY||right|BeagleLogic}} The project aims to run YOLO v2-tiny model inference on the BeagleBone AI at an improved rate of ~30 FPS by leveraging the on-board hardware accelerators and inference optimisations through the TIDL API and TIDL library. The model is to be converted and imported to suit the API requirements.

Student: Prashant Dandriyal
Mentors: Hunyue Yau
Code: https://github.com/PrashantDandriyal/GSoC2020_YOLOModelsOnTheX15
Wiki: https://elinux.org/BeagleBoard/GSoC/2020Proposal/PrashantDandriyal
GSoC entry

Status

This project is currently just a proposal.

Proposal

Completed the pre-requisites posted on the ideas page and created a pull request demonstrating the cross compilation: #135.

About you

IRC: pradan
Github: PrashantDandriyal
School: Graphic Era University, Dehradun
Country: India
Primary language: English, Hindi
Typical work hours: 12PM-6PM IST
Previous GSoC participation: None. My aim in participating remains bringing inference to edge devices: bringing the computation to the data rather than the other way round.

About your project

Project name: YOLO models on the X15/AI

Description

In 10-20 sentences, what are you making, for whom, why and with what technologies (programming languages, etc.)?

The project objectives can be fulfilled along two paths: optimising the model (path 1) or optimising the inference methods (path 2). We propose to follow both paths with limited scope, as defined in the upcoming sections. The software stack is shown in the adjacent figure.

Figure 1: Software stack. Derived from [TIDL docs]

Texas Instruments provides an API, TIDL (Texas Instruments Deep Learning), which is used in both paths. For path 1, we intend to use the API to convert the Darknet-19 based YOLO v2-tiny model into the intermediate format accepted by TIDL. We target the YOLO v2-tiny model over the YOLO v3-tiny models because the API does not currently support all of the layers they use. Also, the v2 model fits into the on-chip memory of the BeagleBone AI and the X15. The v2 model is available in MobileNet and Caffe versions, both of which are supported for model import by TIDL. For example, a frozen TensorFlow MobileNet graph is first optimised for inference as follows:

python "tensorflow\python\tools\optimize_for_inference.py"  --input=mobilenet_v1_1.0_224_frozen.pb  --output=mobilenet_v1_1.0_224_final.pb --input_names=input  --output_names="MobilenetV1/Predictions/Softmax"

The model import is expected to produce satisfactory results when combined with the techniques employed in path 2 (discussed later). Post model-import, we apply further optimisations to the converted model (.bin file):

  • Efficient CNN configuration through automatic layer combination during the import process
  • Introducing sparsity and quantization in the models

These techniques are enabled through the 'configuration file' supplied during every model import.
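For illustration, here is a sketch of what such an import configuration might contain, continuing the MobileNet example above (the key names follow the import examples shipped with the Processor SDK TIDL tooling; all values and file names are placeholders to be tuned against the TIDL documentation):

  # TIDL import configuration (sketch; values are placeholders)
  randParams        = 0
  modelType         = 1     # 0: Caffe, 1: TensorFlow
  quantizationStyle = 1     # dynamic quantization by TIDL
  quantRoundAdd     = 25
  numParamBits      = 8     # 8-bit quantization; wider widths trade speed for accuracy
  inputNetwork      = "mobilenet_v1_1.0_224_final.pb"
  outputNetFile     = "tidl_net_mobilenet_v1_224.bin"
  outputParamsFile  = "tidl_param_mobilenet_v1_224.bin"
  sampleInData      = "sample_input.raw"
  tidlStatsTool     = "eve_test_dl_algo_ref.out"

The converted network and parameter .bin files are then generated by invoking the import tool with this file as its only argument (e.g. ./tidl_model_import.out <config file>).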

For path 2, we use the TIDL library (a part of the TIDL API) to modify how the inference is made. The BeagleBone AI offers 4 Embedded Vision Engines (EVEs) and 2 C66x DSPs, which help accelerate frame processing through multicore operation (with a ~64 MB memory buffer per core). Using these cores allows us to:

  • Distribute the overload per frame using 'double buffering'
  • Distribute frames among cores
  • Distribute the network overload of each frame (better known as 'layer grouping')

We use these techniques in two approaches, 'approach_1' and 'approach_2', as explained in the demonstration section. The basic API flow that both approaches build on is sketched below.
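A minimal sketch of that flow in C++ (the class and method names are those of the TIDL API; the configuration file name is a placeholder):

  #include "configuration.h"
  #include "executor.h"
  #include "execution_object.h"

  using namespace tidl;

  // Inference-time configuration: points to the imported network/parameter
  // .bin files, input dimensions, heap sizes, etc.
  Configuration config;
  config.ReadFromFile("tidl_config_yolov2_tiny.txt");   // placeholder name

  // One Executor per accelerator class; each requested device contributes
  // one Execution Object (EO)
  Executor eve(DeviceType::EVE, {DeviceId::ID0, DeviceId::ID1,
                                 DeviceId::ID2, DeviceId::ID3}, config);
  Executor dsp(DeviceType::DSP, {DeviceId::ID0, DeviceId::ID1}, config);

  // eve[0]..eve[3] and dsp[0], dsp[1] are EOs; each can run (a layer group
  // of) the network on a frame via ProcessFrameStartAsync()/ProcessFrameWait()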

Demonstration

This section contains details of the demos I created to highlight the programming model of the proposed project.

1) Approach 1: One Execution Object (EO) per frame (EVEs only)

Process one frame per EO, i.e. one frame per EOP (with 4 EVEs and 2 DSPs, that is 6 EOs), meaning up to 6 frames in flight at a time. The above-mentioned demo uses 2 EVEs + 2 DSPs (4 EOs), but for layer grouping rather than for distributing frames, so the overall effect is that of processing a single frame at a time. This method does not leverage layer grouping. The expected performance is 6x (10 ms + 2 ms API overhead). The method is memory intensive because each EO is allotted its own input and output buffers. The source code is developed assuming pre-processed input data is available; in all other cases, OpenCV tools are readily available to do the pre-processing. A sketch of this scheme follows the heap-size estimate below.

Source Code: approach_1

Network heap size: `64 MB/EO x 6 EOs = 384 MB`
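A minimal sketch of this scheme, following the pattern of the TIDL API examples and continuing from the executors created in the flow sketched earlier (the frame I/O helpers are hypothetical placeholders):

  #include <vector>

  // 6 EOs (4 EVEs + 2 DSPs), each running the full network on its own frame
  std::vector<ExecutionObject *> eos { eve[0], eve[1], eve[2], eve[3],
                                       dsp[0], dsp[1] };

  // Each EO is allotted its own input/output buffers: memory intensive
  for (auto eo : eos)
  {
      size_t in_size  = eo->GetInputBufferSizeInBytes();
      size_t out_size = eo->GetOutputBufferSizeInBytes();
      eo->SetInputOutputBuffer(ArgInfo(__malloc_ddr(in_size),  in_size),
                               ArgInfo(__malloc_ddr(out_size), out_size));
  }

  // Round-robin over the EOs: up to 6 frames in flight at a time
  int num_frames = 100;                     // e.g. frames in the input clip
  for (int i = 0; i < num_frames; ++i)
  {
      ExecutionObject *eo = eos[i % eos.size()];
      if (eo->ProcessFrameWait())           // collect this EO's previous result
          WriteDetections(eo);              // hypothetical output helper
      ReadPreprocessedFrame(eo, i);         // hypothetical input helper
      eo->ProcessFrameStartAsync();         // start inference on the new frame
  }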

2) Approach 2: Two EOs per frame using Double Buffering (EVEs + DSPs)

Figure 2: Pipeline of EOs. Derived from [1]

The second approach is similar to the one adopted in the imagenet demo of TIDL, but with the DSPs replaced by additional EVEs. The pipelining used in that demo can be used to understand this approach as well; for further detail, refer to the GitHub page of the demo. The TIDL device translation tool assigns layer group ids to layers during the translation process, but if that assignment fails to distribute the layers evenly, we group them explicitly using the configuration file or the main .cpp file. Here, for each frame, the first few layers (preferably half of them) are grouped to execute on EVE0 and the remaining half to run on EVE1; similarly for the other frame on EVE2 and EVE3. There are 4 EOs (4 EVEs and 0 DSPs) and 2 EOPs (each EOP contains a pair of EVEs). We process one frame per EOP, so two frames at a time. Good performance is expected due to the distribution of overload between the EVEs and the use of double buffering. A sketch of this pairing follows the source link below.

Source Code: approach_2
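A sketch of the pairing described above (the fourth Executor argument selects the layer group and ExecutionObjectPipeline chains EOs, both per the TIDL API; the even layer split and file name are illustrative):

  // Layers are assigned to two groups at import time (or explicitly via
  // Configuration::layerIndex2LayerGroupId); group 1 holds roughly the
  // first half of the network, group 2 the remainder
  Configuration config;
  config.ReadFromFile("tidl_config_yolov2_tiny.txt");   // placeholder name

  Executor eve_g1(DeviceType::EVE, {DeviceId::ID0, DeviceId::ID2}, config, 1);
  Executor eve_g2(DeviceType::EVE, {DeviceId::ID1, DeviceId::ID3}, config, 2);

  // 2 EOPs, each chaining a pair of EVEs (EVE0->EVE1 and EVE2->EVE3);
  // one frame per EOP, so two frames are in flight and each EOP
  // double-buffers across its two EOs
  ExecutionObjectPipeline eop0({eve_g1[0], eve_g2[0]});
  ExecutionObjectPipeline eop1({eve_g1[1], eve_g2[1]});

  // The processing loop mirrors approach_1, with EOPs in place of EOs:
  // eop.ProcessFrameWait(); ...; eop.ProcessFrameStartAsync();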

Timeline

Provide a development timeline with a milestone each of the 11 weeks and any pre-work.

April 27 Pre-Work: Community Bonding Period; discussion of the project and the available resources.
May 25 Milestone #1
  • Introductory YouTube video
  • Engage with the community and mentors and discuss the execution of the project as needed
  • Collect literature related to the TIDL API
June 1 Milestone #2
  • Set up the development environment, e.g. the TIDL SDK for the AM57x processor
  • Run demos provided in the API on the actual hardware (the BeagleBone AI) to validate the setup
June 19 Milestone #3
  • I have my exams around the first week of June (no exact dates have been announced yet)
  • Collect MobileNet and Caffe versions of the YOLO v2 models (may stick with just one if it works fine)
  • Demonstrate improved performance by running local system inferences and comparing with previous works.
  • Document the process
  • Submit report for Phase 1 Evaluation
June 22 Milestone #4
  • Get reviews from mentors and discuss modifications for the project plan with the mentors
June 29 Milestone #5
  • Port the YOLO models using the model import feature of TIDL and test the performance, experimenting to find the best settings for the model parameters required in the configuration file used during inference
  • Finalise the model to be used after performance results comparison
  • Run first test on BeagleBone board
  • Document the issues/modifications made
July 6 Milestone #6
  • Try to obtain better results using layer grouping
  • Optimise the other parts of the data feeding-pipeline if needed
  • Obtain equivalent model(s) using the model visualizer to cross check merged layers and better network configuration, if needed
  • Distribute frame overload among the 2 EVEs and 2 DSPs and document the performance improvement
July 13 Milestone #7
  • Test on image and video data
  • Gather performance results and compare with previous works
  • Gather training data for training the YOLO model using Caffe-Jacinto framework
  • Plan second evaluation report
July 17-20 Milestone #8
  • Submit second evaluation report
  • Discuss possible improvements with mentors
  • Train custom model using Caffe-Jacinto framework
  • Document the entire process and settings used
July 27 Milestone #9
  • Include different sparsity in the trained models and find best one for our test case
  • Introduce 8 bit and 16 bit quantization in the sparse model and find best setting
August 3 Milestone #10
  • Completion YouTube video
  • Detailed project tutorial
August 10 - 17 Final week
  • Get the final report reviewed by mentors and incorporate the suggested changes
  • Submit final report

Experience and approach

In 5-15 sentences, convince us you will be able to successfully complete your project in the timeline you have described.

I have been familiar with 32-bit microcontrollers from Texas Instruments since the sophomore year of my Electronics and Communications Engineering degree. I was a quarter-finalist in the India Innovation Challenge and Design Contest (IICDC-2018), during which our team was provided with Texas Instruments resources like the CC26X2R1 and TIVA LAUNCHPAD (EK-TM4C123GXL) EVMs. I have been studying Machine Learning for about a year now, mostly using TensorFlow-Keras as the primary API, and have participated in some ML competitions for better exposure to practical problems. In my current semester, I have Neural Networks as a credit subject, although I am already working on the topic in relation to on-device learning for low-computation-capable devices. I have implemented some simple neural networks in C, which can be found in my GitHub account. I have also studied digital signal processing as a credit subject and expect it to strengthen my understanding of the convolutional neural networks used in this project. Regarding languages, I have good experience with C and C++, the primary languages needed for this project, and have used C++ for several coding competitions held by Google.

The feasibility of the project is supported by several factors. The project is similar to one of the demos provided with the TIDL API: the Single Shot multi-box detector (SSD) demo uses a similar approach, with the following differences:

  TIDL SSD demo               YOLO v2 example
  Input size: 768 x 320       Input size: 224 x 224
  43 layers                   19 layers
  Up to 20 classes            Up to 9418 classes
  Caffe-Jacinto based model   Darknet-19 based model

The output of the YOLO v2 example is similar to that of the SSD demo, so the performance of the TIDL demo can be used to approximate the performance of the YOLO v2-tiny model. With overhead distribution between the EVEs and the C66x DSPs, the frame processing time is around 170 ms, which fulfils our target of running an inference in under 1 s. The YOLO v2-tiny model was selected after cross-checking the supported neural network layers. The effect of the on-board accelerators has been validated by articles and technical papers from Texas Instruments. Pekka Varis ran image segmentation on the Sitara processor that we aim to use (the AM5729); he observed a 30-40% decrease in latency per frame when using the AM5729 compared to the AM5749, and an improved rate of 45 fps. Further, the use of JacintoNet11 dense and sparse models (both trained on Caffe-Jacinto) enhances the performance dramatically. We expect to use similar methods and obtain corresponding performance.

Contingency

What will you do if you get stuck on your project and your mentor isn’t around?

The set of resources/go-to places I would use is as follows:

  • The TIDL documentation and the official Processor SDK guide are the first go-to resources, along with the TIDL programming model for programming reference, syntax and debugging issues
  • AM572X training series by Texas Instruments will be quite helpful in the beginning
  • TIDL API repository at git.ti
  • The E2E forum for any unknown error/issue. I have already reached the 'Intellectual' member level on the forum.
  • Embedded Linux classes by Mark A. Yoder on eLinux; he even has detailed content on the X15 board.
  • Refer to the BeagleBone communities if the problem is related to the boards. I have observed the open-source community (including the Google group) to be quite active.
  • If the issue is related to the models, I will look at the documentation of the related framework and its GitHub issues section. The YOLO models have been around for a while now, so the support is quite good by now.

Benefit

If successfully completed, what will its impact be on the BeagleBoard.org community? Include quotes from BeagleBoard.org community members

With the completion of this project, the documentation will act as a beginner's guide to bringing AI to the BeagleBone AI and (with some modifications) possibly to other BeagleBone boards too. For developers, the performance results will help in benchmarking similar edge devices. As TIDL is still quite young, the observations born out of this project will help collect issues and validations. Also, as the field of Edge AI is growing rapidly, future GSoC projects can use the outcomes of this project to carry the work forward in different directions.

With less than 2 percentage point accuracy compromise, sparsification and TI’s EVE-optimized deep-learning network model JacintoNet11, it is possible to improve the inference latency even further. : Pekka Varis in his blog
One thing to note, if you are not using the TIDL API for your Vision AI apps, such as by porting over Raspberry Pi OpenCV code, then you are not using the accelerated TIDL hardware.  You're not gaining a thing. : sjmill01 in his article on element14

Misc

Please complete the requirements listed on the ideas page. Provide link to pull request.

Done: the cross-compilation pull request (#135) is linked in the Proposal section above.

Suggestions

Is there anything else we should have asked you?