ECE497 Project Smart Glass

From eLinux.org

Latest revision as of 08:43, 26 October 2017

Embedded Linux Class by Mark A. Yoder


Team members: Hazen Hamather and Luke Kuza

Grading Template

I'm using the following template to grade. Each slot is 10 points. 0 = Missing, 5=OK, 10=Wow!

09 Executive Summary
05 Installation Instructions 
08 User Instructions
09 Highlights
09 Theory of Operation
10 Work Breakdown
10 Future Work
10 Conclusions
10 Demo
10 Not Late
Comments: Nice project.  The report is a bit weak in how to make it work on my system.
I look forward to seeing the painted frame and the motion sensor working. 

Score:  90/100

Executive Summary

Smart Glass will be a spin-off of the original Magic Mirror developed by Michael Teeuw. The team plans to build a setup very similar to his but take it a step further by implementing motion control using an Xbox Kinect. The hardware aspect of this project is 95% complete, lacking only a fresh coat of paint and a mounting bracket. The frame and housing have been constructed, and the mirror film has been applied to the glass and installed in the frame. The monitor that rests behind the glass has been securely fastened into place, and an external button has been soldered to it so the user can turn the monitor on and off as desired. Currently, we have discovered a way to access some of the functionality associated with the Kinect, such as IR distance and the camera itself, utilizing OpenKinect, an open-source Kinect library. We have also designed a sleek (at least we think so) user interface that displays information the user might find important or interesting. One of the major problems continuing to plague the team is the lack of full access to the Kinect. OpenKinect has limited capabilities with respect to our project scope, leading us to seek out other methods such as OpenNI2.

Packaging

A front picture is needed.

Smart Glass was designed and built with possible repairs and upgrades in mind. The front facade provides a nice framed look while the rear is all business. The display sitting on the glass can be easily removed with only a few screws. Two screws hold the monitor in place laterally, and gravity (combined with a couple of shims) securely holds it in place vertically. This was done to make removal easy, which we did numerous times during not only the build phase but the programming phase as well.

Maybe on an unlucky day, the glass gets bumped and shatters. As sad as that would be, replacing it is quite simple. The facade holding the now-shattered glass can be removed by taking out the 8 screws around its edges. After this has been done, another 8 screws around the edges of the facade hold the glass in. Once those screws are removed, the facade comes apart, allowing new glass to be inserted. Good thinking ahead.

These are only a few of the ways in which design and packaging were taken into account. Should anybody need to access internal components, we feel they will be pleased with our overall design.

Installation Instructions

The repository for this project can be found on GitHub. Installation is simple: just clone the repository:

alarm@alarm$ git clone https://github.com/hazenhamather/SmartGlass


Many details seem to be missing here.

User Instructions

Once again, getting the mirror up and running is as easy as installation. First cd into the proper directory:

alarm@alarm$ cd ~/SmartGlass/SmartGlass/FlipClock-master

Next, open TestingAPI.html in Chrome. How do I do this?


Done! This file will work on either the BeagleBone or a host computer. In order to have the BeagleBone display to a monitor of some kind, run:

startxfce4 Where do I get this?

This will start the GUI, which will allow you to open TestingAPI.html on the Bone. Also, be sure to install Chromium to view the page.

Note: This project was built for Arch but any modifications that need to be made to run on Debian or another distribution should be quite minor.

Highlights

The highlights of this project are certainly the durable frame combined with the simple user interface, which is extremely modular. With the wifi chip hooked up to the BeagleBone, we can push updates, change the look, and even add features as time goes on, all without ever taking the frame off the wall. The usability is incredible, and the user experience will significantly improve when full functionality of the Kinect is achieved.

[Image: SmartMirrorProgress.jpg]

Here is a YouTube demo of the project in action with an explanation.

Theory of Operation

The BeagleBone is running a simple webpage that, every few minutes, makes GET requests to multiple APIs. These APIs return JSON objects full of data, allowing us to pick which pieces of data we would like to display. This keeps the interface current. One of the items on the display, the famous quote, is fetched only 4 times per day due to the limited number of calls allowed by the API's provider. At a later date, the BeagleBone will also do image processing in order to alter what the user sees via input through the Kinect.
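The refresh-and-rate-limit behavior described above can be sketched roughly as follows. This is a minimal illustration in plain JavaScript, not the actual code from the repository: the endpoint URLs, JSON field names, element ids, and the 5-minute/6-hour intervals are all hypothetical placeholders.

```javascript
// Sketch: poll fast-changing APIs every few minutes, but gate the
// rate-limited quote API to at most 4 calls per day (one per 6 hours).
const SIX_HOURS_MS = 6 * 60 * 60 * 1000;

// Pure helper: has enough time passed to call the quote API again?
function shouldFetchQuote(lastFetchMs, nowMs) {
  return nowMs - lastFetchMs >= SIX_HOURS_MS;
}

let lastQuoteFetch = 0;

async function refresh() {
  // Fast-changing data: fetched on every cycle (hypothetical URL and fields).
  const weather = await fetch('https://api.example.com/weather').then(r => r.json());
  document.getElementById('temp').textContent = weather.temperature;

  // Rate-limited data: only refreshed when the 6-hour window has elapsed.
  if (shouldFetchQuote(lastQuoteFetch, Date.now())) {
    const quote = await fetch('https://api.example.com/quote').then(r => r.json());
    document.getElementById('quote').textContent = quote.text;
    lastQuoteFetch = Date.now();
  }
}

// Only start polling when running in a browser page.
if (typeof document !== 'undefined') {
  refresh();                              // initial paint
  setInterval(refresh, 5 * 60 * 1000);    // keep the display current
}
```

Separating the timing check into a small pure function (`shouldFetchQuote`) keeps the rate-limit rule easy to reason about independently of the polling loop.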

As far as the mirror goes, the film we applied to the glass gives the side with the most light a reflective look. This is the secret to allowing the monitor to project through the backside. Since the inside of the frame is completely dark and the outside is bright, the user will see their reflection while at the same time seeing the display through the glass, giving the appearance of the "magic" mirror, as originally coined by Michael T.

Work Breakdown

The large milestones we encountered were designing the frame, constructing the frame, and setting up our interface combined with motion-detection controls. Luke originally had the idea of how to mount the glass in a frame, and Hazen took that idea and created a SolidWorks model of what the final product should look like. Hazen then provided the group with a Bill of Materials (BOM), which Luke took to the department in order to get funding. Using that BOM, we gathered the right construction materials and began the building process. Hazen took control of the building process, and Luke handled the electrical work of altering our monitor to suit our needs. Hazen completed the user interface and handled the web programming, while Luke was the backbone in making sure our hardware was always ready to keep moving forward. Both Hazen and Luke are continuing their work on the Kinect in an attempt to integrate it with the interface for a complete end-user experience.

Future Work

One neat thing we would like to see done at some point is to recreate this same project running Windows instead. Using Windows for development opens up most (if not all) of the Kinect capabilities granted by Microsoft. It would make gesture recognition easier and also give the ability to do some very complex motion tracking or even facial recognition. Advanced techniques such as these could allow the same mirror to be customized for multiple users.

Conclusions

The project was a lot more challenging than the team originally anticipated. The challenge of creating a reflective surface that still allows light to pass through was difficult, especially at a low price point. In addition, mounting the glass in a frame was also difficult; the entire mount/frame took about 13 hours to complete. The Kinect was another difficult problem. While getting skeleton tracking working on a Windows/x86 host was trivial, the problem arose when getting the software to work on ARM systems such as the BBB. The software originally chosen to do skeleton tracking was OpenNI, NiTE, and the PrimeSense Sensor Kinect driver. The only issue is that Apple bought and shut this company down, and their old software is no longer updated. The ARM package of NiTE had segmentation faults on the BBB and, after countless hours, was determined to be unworkable. A switch to ROS (Robot Operating System) was considered, but the switch was not made in time to get tracking working. What *did* work, however, was retrieving the camera image and depth map from the Kinect.

The project was an extremely good learning experience, and the team cannot wait to add tracking functionality in the future. While this may not even be possible on ARM or a slightly sluggish core like the BBB, it will continue to be researched. The tasks we completed were difficult, and looking back, adding the Kinect functionality could have been an entire project on its own. But alas, this project will continue, and part 2 will consist of interfacing the Kinect and the user.
