= ECE497 Project - Object Detection w/ DNN =
[[Category:ECE497 |PT]]
[[Category:ECE497Fall2018 |PT]]

Team members: Paul Wilda, Leela Pakanati

== Grading Template ==
I'm using the following template to grade. Each slot is 10 points.
0 = Missing, 5=OK, 10=Wow!!

<pre style="color:red">
00 Executive Summary
00 Installation Instructions
00 User Instructions
00 Highlights
00 Theory of Operation
00 Work Breakdown
00 Future Work
00 Conclusions
00 Demo
00 Late
Comments: I'm looking forward to seeing this.

Score: 10/100
</pre>

<span style="color:red">(Inline Comment)</span>

== Executive Summary ==

[[File:WorkingDemo.jpg|400px|thumb|none|left|Picture of the fully functional in-class demo]]

We are using TensorFlow and OpenCV to detect items in the frame of a web camera. The camera is mounted on a tilt/pan kit so that we can also track the objects in frame. Because object detection is computationally intensive, we use a local computation server to process each image and find the objects within it. The computation server returns a processed image and an error vector, which the Pi converts to a control vector. The Pi can then display the processed image and adjust its angle to keep the tracked object in the middle of the frame. We would have liked to perform all the processing on the Pi as well, which would have dramatically decreased the complexity of the project; however, we could not get a reasonable response time from either the Pi or the BeagleBone. The Raspberry Pi takes at least 3 seconds per image to process and the BeagleBone Black at least 5 seconds.

== Packaging ==
In the spirit of small build, big execution, we created an enclosure for our project out of MDF. We CNC'd two pieces that were then glued together, sanded, and painted. A notch was cut into the back for the cables to reach the Raspberry Pi, which was mounted on the underside. The Pi was mounted with four M3 plastic standoffs and M3 screws. The tilt/pan kit was mounted on the top piece, and a hole was drilled through to route the servo motor cables to the Pi.

Below you can see pictures of the assembly:

<gallery>
Side.jpg|Side of system
Front.jpg|Front of system
Sauron_Underside.jpg|Underside of system
</gallery>

== Installation/User Instructions ==

These are step-by-step instructions on how to install and run this project.

* Our GitHub repository: [https://github.com/LeelaPakanati/ECE434_Sauron.git https://github.com/LeelaPakanati/ECE434_Sauron.git]

=== Install Requirements ===

* Run install_host.sh on the host machine and install_pi.sh on the Pi to install all of the requirements automatically. A full list of what gets installed is in the README.md. A typical invocation is shown below.
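
Assuming each script is run from the top of the cloned repository (check the README.md for the exact usage):

<pre>
host$ ./install_host.sh
pi$ ./install_pi.sh
</pre>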

=== Setup ===

==== IP Address Setup ====

* First, get the IP addresses of both the compute server and the SBC client. Note that if these devices are not on the same local area network, the server's port must be forwarded so it can be reached over the internet.
* Then enter the server's IP address in eye.py as the value of server_ip, and similarly enter the SBC client's IP in tower.py as the value of client_ip, as in the example below.
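
The variable names come from the project; the addresses here are placeholders:

<pre>
# eye.py (runs on the Pi)
server_ip = "192.168.1.10"   # address of the compute server

# tower.py (runs on the compute server)
client_ip = "192.168.1.20"   # address of the SBC client (the Pi)
</pre>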

==== Running ====

* On the host, run <code>./tower.py <object to track></code>
* On the Pi, run <code>./eye.py</code>

== Theory of Operation ==

=== Hardware ===

[[File:Fritzing Diagram.png|400px|thumb|none|left|Schematic]]

=== Software ===

[[File:High level diagram.png|400px|thumb|none|left|High-level system overview]]

The camera sends each image to the Raspberry Pi over USB. The Pi then sends the image to the web server. The web server processes the image, finds the person it has the highest confidence in, and returns to the Pi an error vector giving the offset between the identified object and the center of the frame. The Pi feeds this error vector through a PID control loop to produce a control vector. Finally, the control vector is turned into the PWM signals sent to each servo.
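
A minimal sketch of that Pi-side control step (class and variable names are illustrative, not the exact ones in eye.py, and the gains are placeholders):

<pre>
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        """One PID update: error in, correction out."""
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pan_pid = PID(kp=0.02, ki=0.005, kd=0.001)    # placeholder gains
tilt_pid = PID(kp=0.02, ki=0.005, kd=0.001)

def control_from_error(error_x, error_y, dt):
    """Turn the (x, y) pixel error vector into pan/tilt corrections."""
    return pan_pid.step(error_x, dt), tilt_pid.step(error_y, dt)
</pre>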

This whole process takes anywhere from 100 to 130 ms. Our greatest bottleneck is the time it takes to transfer the image to and from the web server. Despite the delays inherent in file transfer over the internet, this is still significantly faster than doing all the processing on the Pi or the BeagleBone: because of their hardware limitations, it took nearly 3 seconds on the Pi and 6 seconds on the BeagleBone to process a single image.
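
For reference, the server-side detection presumably follows the standard OpenCV DNN pattern sketched below; the model files, input size, and threshold are placeholders, not necessarily what tower.py uses:

<pre>
import cv2

# Placeholder model files for a TensorFlow SSD-style detector
net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb", "graph.pbtxt")

def detect(frame, conf_threshold=0.5):
    """Run one detection pass and return (class_id, score, box) tuples."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, size=(300, 300), swapRB=True)
    net.setInput(blob)
    detections = net.forward()          # shape: [1, 1, N, 7]
    boxes = []
    for det in detections[0, 0]:
        score = float(det[2])
        if score > conf_threshold:
            x1 = int(det[3] * w); y1 = int(det[4] * h)
            x2 = int(det[5] * w); y2 = int(det[6] * h)
            boxes.append((int(det[1]), score, (x1, y1, x2, y2)))
    return boxes
</pre>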

In an effort to further optimize the system, we used both cores on the Pi to parallelize some of the tasks. Currently, the image transfer and display are handled on one core while the control loop runs on the other. This was done to preserve the timing of the control loop and to take some load off of the core handling the image.
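
A minimal sketch of that two-process split using Python's multiprocessing module (the worker bodies are stand-ins for the real logic in eye.py):

<pre>
from multiprocessing import Process, Queue
import time

def image_worker(error_queue):
    """Core 1: ship frames to the server, display results, forward errors."""
    while True:
        error = (0.0, 0.0)        # stand-in for the server's error vector
        error_queue.put(error)
        time.sleep(0.1)           # ~100 ms per server round trip

def control_worker(error_queue):
    """Core 2: turn error vectors into servo commands."""
    while True:
        error_x, error_y = error_queue.get()
        # PID step and PWM update would go here

if __name__ == "__main__":
    q = Queue()
    Process(target=image_worker, args=(q,), daemon=True).start()
    control_worker(q)
</pre>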

== Highlights ==

* The project uses OpenCV and TensorFlow models for object detection
* The servos can accurately track a person, and the integrator term of the PID loop lets them recover even if the person is temporarily lost from view
* The display classifies all objects, not just the one being tracked, so you can see every detection
* The project uses a rudimentary form of cloud computing for the neural-network detection
* The project uses a TCP socket for reliable communication (a sketch of the framing follows this list)
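
The wire format below is a hedged guess at how such an exchange is typically framed; the actual protocol in eye.py/tower.py may differ:

<pre>
import struct

def recv_exact(sock, n):
    """Read exactly n bytes or raise if the peer closes early."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def send_blob(sock, data):
    # 4-byte big-endian length prefix, then the payload (e.g. a JPEG)
    sock.sendall(struct.pack(">I", len(data)) + data)

def recv_blob(sock):
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)
</pre>

TCP guarantees ordered, reliable delivery, but it is a byte stream; the length prefix is what turns that stream back into discrete images.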

== Work Breakdown ==

* Getting OpenCV on the Pi/BeagleBone (10/28) - Leela
* Testing Pi vs. BeagleBone operation (10/29) - Both
* Image sending and receiving (11/5) - Leela
* Web server configuration (11/5) - Leela
* Servo and tilt/pan kit assembly (11/10) - Paul
* Control loop and tuning for servos (11/14) - Paul
* Enclosure design and construction (11/16) - Paul
* Documentation (11/19) - Paul

== Future Work ==

* Building our own training dataset would be a very interesting addition to this project. It would allow us to recognize individuals and track only certain people.

* Making the tilt/pan kit more robust would let us mount a nicer camera, which would significantly improve both image quality and recognition accuracy.

* Since we are mostly tracking a single object, it would be interesting to look into other detection algorithms. About 40 ms of our delay comes from processing the image. If, instead of analyzing the whole frame to find all objects, we only searched promising blobs of the image, we could significantly reduce the computation needed. That might even allow all the computation to run on the Pi/BeagleBone.

== Conclusions ==

This was a very interesting project overall, and it introduced concepts that neither of us had any experience with. We ran into some difficulties and roadblocks, largely due to the hardware limitations of the Pi, but this was a good starting point, and it gives us a lot to improve on if we decide to keep developing the system.

= ECE434 Project - BoneBot =
[[Category:ECE497 |PT]]
[[Category:ECE497Fall2018 |PT]]
{{YoderHead}}

Team members: [[user:Stichtjd|J. Dalton Stichtenoth]] and [[user:Austinin|Isaac Austin]]

== Grading Template ==
I'm using the following template to grade. Each slot is 10 points.
0 = Missing, 5=OK, 10=Wow!

<pre style="color:red">
00 Executive Summary
00 Installation Instructions
00 User Instructions
00 Highlights
00 Theory of Operation
00 Work Breakdown
00 Future Work
00 Conclusions
00 Demo
00 Late
Comments: I'm looking forward to seeing this.

Score: 10/100
</pre>

<span style="color:red">(Inline Comment)</span>

== Executive Summary ==

[[File:BoneBot.jpg|thumb|The Fritzing diagram of the BoneBot.]]

The purpose of this project is to create a small remote-controlled robot with the BeagleBone Black at its core. To make the actual building of the robot as simple as possible, we borrowed a chassis from the Rose-Hulman Mechanical Engineering Department. We use the phone app Blynk as the user interface to control the robot. It also has an autonomous mode in which it uses two stationary IR sensors to avoid objects in front of it.

== Hardware ==
For this project, we borrowed a plexiglass chassis for the base. However, anything that can support the weight of the hardware and can be fitted with two DC motors will work. (The original plan was to build the frame out of Legos, before we found out how expensive Legos are.) Besides the chassis, the breadboard, and the BeagleBone, we needed two IR sensors, a battery, a USB WIFI dongle, and an L293D H-bridge. We used KeyesIR sensors because they come with built-in potentiometers for adjusting the detection range, but if you don't anticipate having to change the range, a smaller and simpler IR sensor will do.
The battery is self-explanatory, as we wanted the robot to move without being tethered to a laptop by a USB cable. The battery should supply around 5 V at over 1 A. As long as it meets those requirements and can connect to the DC jack, the battery should be sufficient.
The WIFI dongle allows WIFI communication; its setup is described in the next section. This part can be skipped if your BeagleBone comes with WIFI capabilities.
Lastly, we have an L293D H-bridge. This lets us drive each DC motor in both directions. Implementing bi-directional movement without an H-bridge would have required much more hardware and made the breadboard exceptionally messy. Whatever H-bridge you use, I recommend one with a "D" at the end of the part number, as that means the flyback diodes are built into the IC and don't need to be added manually. A minimal sketch of driving the motors through the H-bridge follows.
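
Here is a rough sketch of direction control through the L293D using bonescript; the pin assignments are illustrative, not the ones used in drive.js:

<pre>
const b = require('bonescript');

// L293D inputs for the two motors; pin choices are illustrative.
const L1 = 'P9_12', L2 = 'P9_15';   // left motor
const R1 = 'P9_23', R2 = 'P9_27';   // right motor
[L1, L2, R1, R2].forEach((p) => b.pinMode(p, b.OUTPUT));

function setMotor(inA, inB, dir) {
  // dir: 1 = forward, -1 = reverse, 0 = stop
  b.digitalWrite(inA, dir > 0 ? b.HIGH : b.LOW);
  b.digitalWrite(inB, dir < 0 ? b.HIGH : b.LOW);
}

function forward()   { setMotor(L1, L2, 1);  setMotor(R1, R2, 1);  }
function reverse()   { setMotor(L1, L2, -1); setMotor(R1, R2, -1); }
function turnLeft()  { setMotor(L1, L2, -1); setMotor(R1, R2, 1);  }
function turnRight() { setMotor(L1, L2, 1);  setMotor(R1, R2, -1); }
function stop()      { setMotor(L1, L2, 0);  setMotor(R1, R2, 0);  }
</pre>

Swapping a motor's two leads (or its two L293D inputs) flips its sense of "forward," so expect to adjust signs when wiring your own.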

== Installation Instructions ==
These installation instructions were performed on a BeagleBone Black running Ubuntu 18.04.

* First, install the Blynk library and bonescript, which are used to interface with the Blynk app:
<pre>
bone$ sudo npm install -g --unsafe-perm onoff blynk-library
bone$ sudo npm install bonescript
</pre>

* Then, clone the git repository at https://github.com/aisaacn/BoneBot
You'll have to go into drive.js and replace the Blynk authorization token with the one from your own Blynk instance. Your Blynk instance should consist of a joystick bound to virtual pin 0, with x and y ranging from -100 to 100, plus a button bound to virtual pin 1 that toggles the BoneBot's autonomous mode. A sketch of the corresponding drive.js setup is shown below.
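
As a hedged sketch (variable names are illustrative; see drive.js for the real ones), the Blynk side of the code looks roughly like this:

<pre>
const Blynk = require('blynk-library');

const AUTH = 'YourAuthTokenHere';          // token from your Blynk project
const blynk = new Blynk.Blynk(AUTH);

const joystick = new blynk.VirtualPin(0);  // x, y each in [-100, 100]
const autoBtn  = new blynk.VirtualPin(1);  // toggles autonomous mode

let autonomous = false;

joystick.on('write', (param) => {
  const x = parseInt(param[0], 10);
  const y = parseInt(param[1], 10);
  if (!autonomous) {
    // map (x, y) onto forward/reverse/turnLeft/turnRight here
  }
});

autoBtn.on('write', (param) => {
  autonomous = param[0] === '1';
});
</pre>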
At this point, the code should run, so all that's left is to get it connected to WIFI and running on startup.
To run the program on startup:
<pre>
bone$ sudo nano /lib/systemd/BoneBot.service
</pre>
Insert the following lines into the file:
<pre>
[Unit]
Description=BoneBot remote-control robot
After=syslog.target network.target

[Service]
Type=simple
ExecStart=/path/to/git/repo/start.sh
Restart=always
RestartSec=1

[Install]
WantedBy=multi-user.target
</pre>
Now that we have the service created, we just need to let the system know it exists and tell it to start working:
<pre>
bone$ cd /etc/systemd/system/
bone$ sudo ln /lib/systemd/BoneBot.service BoneBot.service
bone$ sudo systemctl daemon-reload
bone$ sudo systemctl start BoneBot.service
bone$ sudo systemctl enable BoneBot.service
</pre>
After this, the code will start running whenever the BeagleBone boots. However, that is mostly useless if there isn't WIFI set up for the Bone to connect to. To set that up:

* First, enter the connmanctl shell using
<code>connmanctl</code>

Once there, the following commands will enable WIFI:
<pre>
enable wifi
scan wifi
services (this lists the WIFI networks available to you)
agent on
connect wifi_###########_## (this is the WIFI code displayed when you ran services before)
quit
</pre>

Congratulations! You've now set up your BeagleBone Black to connect to WIFI on boot. Restart the Bone and run
<code>systemctl status BoneBot.service</code>
to see whether you have a working process. It takes a little while for the Bone to connect to WIFI and Blynk, so refresh this a few times over a few minutes.

== User Instructions ==

Assuming all the above instructions worked for you, the only things needed to run are the Blynk app and the battery. Plug the battery into the DC jack on the BeagleBone and wait a few minutes. Eventually you should get a notification in the Blynk app telling you that you have connected to the Bone, at which point moving the joystick forward moves the BoneBot forward. The same holds for reverse, right, and left. If the button is pressed, the BoneBot stops accepting joystick inputs and moves forward until it detects an object with the IR sensors, at which point it maneuvers to avoid that object.

== Highlights ==
Our project does not require any connection to a computer after setup is done, and it runs automatically every time the BeagleBone is turned on. The interface is simple and intuitive to anyone who played with remote-control cars as a kid. It can also enter an autonomous mode in which it drives around on its own and avoids obstacles in its path. Here's an example of that:

[https://www.youtube.com/watch?v=Qh3nZER-58o BoneBot Demo]

== Theory of Operation ==
This project runs from a single JavaScript file called drive.js. While this file does import from the Blynk library for functionality, there is no overly complex code or flowchart needed to describe this project. Just by reading through the drive.js code, you should be able to understand where everything comes from and how the system works. As a rough guide, the autonomous mode boils down to the loop sketched below.
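
This sketch is illustrative only, not the literal contents of drive.js: the pins, sensor polarity, and timing are assumptions, and it reuses the motor helpers and the <code>autonomous</code> flag from the earlier sketches:

<pre>
const IR_LEFT = 'P8_7', IR_RIGHT = 'P8_8';   // illustrative pins
b.pinMode(IR_LEFT, b.INPUT);
b.pinMode(IR_RIGHT, b.INPUT);

setInterval(() => {
  if (!autonomous) return;                   // joystick mode: do nothing here
  const left = b.digitalRead(IR_LEFT);       // KeyesIR output goes LOW on detect
  const right = b.digitalRead(IR_RIGHT);
  if (left === b.LOW)        turnRight();    // obstacle on the left
  else if (right === b.LOW)  turnLeft();     // obstacle on the right
  else                       forward();
}, 50);                                      // poll the sensors every 50 ms
</pre>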

== Work Breakdown ==
Isaac Austin:
* Piloted the writing of the final code
* Handled wiring on the breadboard
* JavaScript coding for object avoidance and Blynk control

Dalton Stichtenoth:
* WIFI capabilities
* Soldering
* Acquiring parts and materials
* Running a function on startup
* Documentation

== Future Work ==

Additional work that could be added to this project:
* Currently the wheels turn very slowly; improving the power supply so that the bot moves at a faster pace would improve the project
* The movement is limited to four directions: straight, backwards, turn right, and turn left. Using PRU GPIO and PWM pulses to vary the motor speed would add additional directions of movement
* WIFI, and subsequently the Blynk application, takes a long time to connect on boot. Improving that so it works right after booting up would help
* The IR sensors have a limited field of view for object avoidance. Widening the field of view by rotating a sensor or adding more of them would improve the autonomous mode
* Beyond those improvements, this project only uses 6 GPIO pins, so adding your own special flair, like a buzzer horn or turn signals, can make it feel more personalized

== Conclusions ==
We originally wanted to use more capable sonar sensors for object avoidance, but due to time and money constraints we settled for the less effective IR sensors. Implementing PWM using the PRU GPIO and finding a way to increase the motor speed would have been preferable, but the current state of the project is one we are happy with. Getting WIFI and startup functionality working took much longer than we expected, but getting it finished was worth the effort.

{{YoderFoot}}

= User:Austinin =
ECE434 Student
CS Major
[[Category:ECE497 |UA]]