Description:

It takes video input and uses a neural network to determine the posture of its subjects, in conjunction with the Google Natural Language API, to suggest improvements to speech and posture.

Inspiration:

When we were coming up with our ideas, we were worried about how we were going to present any of them, so we realized we could build a hack that helps people improve their speeches and presentations.

What it does:

It takes a live stream of audio and video and uses neural networks and the Google Natural Language API to determine how you can improve your presentations.

How we built it:

We used the TensorFlow Object Detection API and the Google Cloud client libraries to build our system.
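As a minimal sketch of the detection step, assuming the TF1-era frozen-graph workflow and a pre-trained model from the detection model zoo (the model path and the person-class filter here are illustrative, not our exact setup):

```python
import cv2
import numpy as np
import tensorflow as tf

# Illustrative path to a frozen detection graph; any model from the
# TensorFlow detection model zoo loads the same way.
MODEL_PATH = "ssd_mobilenet_v1_coco/frozen_inference_graph.pb"

# Load the frozen TensorFlow graph.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(MODEL_PATH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

# Grab frames from the webcam and run detection on each one.
cap = cv2.VideoCapture(0)
with tf.Session(graph=graph) as sess:
    image_tensor = graph.get_tensor_by_name("image_tensor:0")
    boxes = graph.get_tensor_by_name("detection_boxes:0")
    scores = graph.get_tensor_by_name("detection_scores:0")
    classes = graph.get_tensor_by_name("detection_classes:0")

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # The model expects a batch of RGB images.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        out_boxes, out_scores, out_classes = sess.run(
            [boxes, scores, classes],
            feed_dict={image_tensor: np.expand_dims(rgb, axis=0)},
        )
        # Keep confident detections of people (COCO class 1 = person)
        # and hand their bounding boxes to the posture heuristics.
        for box, score, cls in zip(out_boxes[0], out_scores[0], out_classes[0]):
            if score > 0.5 and int(cls) == 1:
                print("person at", box, "score", score)
cap.release()
```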

Challenges we ran into:

One challenge was getting TensorFlow to analyze the images without crashing. The majority of our time was spent debugging libraries so that there would not be any fatal errors.

Accomplishments that we're proud of:

Getting a semi-working product while being forced to pivot away from something entirely different.

What we learned:

We learned basic JavaScript, as well as how to use TensorFlow to create a neural network.

What's next:

Next, we would improve the feedback so it is more specific and helpful, rather than generic.

Built with:

We used TensorFlow to create the neural network, as well as Google's Cloud APIs for speech-to-text transcription and natural-language analysis.
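A minimal sketch of the audio side, assuming the current google-cloud-speech and google-cloud-language Python clients (the function name, file format, and sample rate here are illustrative):

```python
import io

from google.cloud import language_v1
from google.cloud import speech


def transcribe_and_analyze(audio_path):
    """Transcribe a short audio clip, then score its sentiment."""
    # Speech-to-Text: transcribe the recorded audio.
    speech_client = speech.SpeechClient()
    with io.open(audio_path, "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    response = speech_client.recognize(config=config, audio=audio)
    transcript = " ".join(
        result.alternatives[0].transcript for result in response.results
    )

    # Natural Language: score the overall sentiment of the transcript,
    # which can be turned into feedback about tone.
    language_client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=transcript, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    sentiment = language_client.analyze_sentiment(
        request={"document": document}
    ).document_sentiment
    return transcript, sentiment.score
```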

Prizes we're going for:

Grand Prize

Google Home Mini

$100 Amazon Gift Cards

Intel® Movidius™ Neural Compute Stick

Team Members:

Michael Lu, Darnele Adhemar, Febby Chang, Matt Avison, Srivishnu Piratla