It takes video input and uses a neural network to determine the subject's posture, combining that with the Google Natural Language API to suggest improvements to both speech and posture.
When we were coming up with ideas, we worried about how we would present any of them, and realized we could build a hack that helps people improve their speeches and presentations.
It takes a live stream of audio and video and uses neural networks and the Natural Language API to determine how you can improve your presentation.
We used the TensorFlow Object Detection API along with the Google Cloud helper libraries to build our system.
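As a rough illustration (not our exact code) of how detected pose keypoints could be turned into a posture signal, here is a minimal sketch; the keypoint names, normalized-coordinate format, and sensitivity factor are all our assumptions, not output guaranteed by any particular model:

```python
# Sketch: score posture from pose keypoints (assumed format: name -> (x, y)
# in normalized image coordinates, as a pose-estimation model might return).
def posture_score(keypoints):
    """Return a 0-1 posture score; 1.0 means the head sits directly above the hips."""
    head = keypoints["nose"]
    left_hip = keypoints["left_hip"]
    right_hip = keypoints["right_hip"]
    hip_center_x = (left_hip[0] + right_hip[0]) / 2
    # Horizontal lean: how far the head drifts from being above the hips.
    lean = abs(head[0] - hip_center_x)
    return max(0.0, 1.0 - lean * 4)  # 4 is an assumed sensitivity factor

# An upright subject: head centered over the hips scores 1.0.
upright = {"nose": (0.50, 0.2), "left_hip": (0.45, 0.6), "right_hip": (0.55, 0.6)}
print(posture_score(upright))  # 1.0
```

A real version would pull these keypoints frame by frame from the detection model and smooth the score over time before showing feedback.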
One challenge was getting TensorFlow to analyze the images without crashing; the majority of our time was spent debugging library issues so there would be no fatal errors.
We got a semi-working product despite being forced to pivot away from something entirely different.
We learned basic JavaScript, as well as how to use TensorFlow to create a neural network.
Next, we would improve the feedback so it is more specific and helpful rather than generic.
We used TensorFlow to create the neural network, and Google's Cloud API to transcribe speech to text for the language analysis.
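The transcription itself goes through Google's cloud service; as a sketch of the kind of feedback step that could follow once a transcript exists, here is a minimal example. The filler-word list, pace threshold, and function name are our assumptions for illustration:

```python
# Sketch: generate simple speech feedback from a transcript (fillers and pace).
FILLERS = {"um", "uh", "like", "basically"}  # assumed filler-word list

def speech_feedback(transcript, duration_seconds):
    """Return a list of improvement tips based on filler count and words per minute."""
    words = transcript.lower().split()
    wpm = len(words) / (duration_seconds / 60)
    filler_count = sum(1 for w in words if w.strip(".,") in FILLERS)
    tips = []
    if filler_count > 2:
        tips.append(f"Try to cut filler words ({filler_count} found).")
    if wpm > 170:  # assumed pace threshold
        tips.append("Slow down: you're speaking over 170 words per minute.")
    return tips or ["Nice pacing and clean delivery!"]

print(speech_feedback("Um so basically our hack um like analyzes posture", 10))
```

This is the "generic" style of feedback we'd want to replace with more specific suggestions, as noted above.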
Google Home Mini
$100 Amazon Gift Cards
Intel® Movidius™ Neural Compute Stick