Easily configurable gesture sequences for speech-impaired people to interact with their smart devices.
We both wanted to make something that could improve at least one person's life, and we were both super excited to work with OpenCV and learn about computer vision from a programmer's perspective.
Processes images from a webcam using OpenCV, computing convex hulls along with convexity defects to figure out how many fingers the user is holding up. It tracks the progress and accuracy of hand detection to determine when a user completes the list of moves constituting a valid gesture. These gestures can be defined and modified by the user from the website.
We used Meteor, React, MongoDB, and Bootstrap for the website. Our hand detection was coded in Python.
Oh boy... all of them. Getting rid of background noise from a terrible-quality camera in bad lighting is... difficult, we'll call it. Integrating with existing smart devices is also hard if you're trying to call into their APIs as opposed to extending their functionality.
The first time we drew a convex hull around an outlined hand, we felt like we actually knew what we were doing for a few seconds (of course, we soon learned that we still didn't have a clue what we were doing).
I'll probably have 'cv2.<xyz>' stuck in my muscle memory for a month or two. We learned how to use React with Bootstrap and developed a much better appreciation for all of the products in our lives that know what things are just by looking at them for a short time.
Integration between many different smart devices. Currently our web app is just a proof of concept: it doesn't actually communicate with our OpenCV program to set what to look for, so that would be the next step.
Python, OpenCV, Meteor, Bootstrap, React
Intel® Movidius™ Neural Compute Stick
TBI Pro Gaming Headset
$100 Amazon Gift Cards
Grand Prize
Lutron Caseta Wireless Kit
Misfit Shine 2