Description:

Face2Forte detects the emotion on a user's face and procedurally generates music with Markov chains to reinforce that emotion.

Inspiration:

Our group's love of music, interest in computer vision, and interest in artificial intelligence inspired us to create a project that combines these fields. We wanted to analyze emotion because we appreciate the connection that music has to emotion in everyday life. Music has a strong impact on one's mood, so we decided to reverse the power dynamic and have mood impact the music one hears.

What it does:

Our project uses OpenCV to track faces and classify the emotions they show, with the camera feed streaming from a Raspberry Pi. The detected emotion then selects one of our trained Markov models, which procedurally generates MIDI notes played through Mido and FluidSynth.
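A minimal sketch of the emotion-to-playback mapping, assuming pyfluidsynth and a General MIDI SoundFont at ./piano.sf2 (both assumptions; the note pools standing in for our Markov models are illustrative only):

    # Toy stand-ins for the per-emotion models: each "model" here is just
    # a pool of MIDI note numbers to sample from, not a real Markov chain.
    import time
    import random
    import fluidsynth

    MODELS = {
        "happy": [60, 62, 64, 65, 67, 69, 71],   # C major
        "sad":   [60, 62, 63, 65, 67, 68, 70],   # C minor
    }

    fs = fluidsynth.Synth()
    fs.start(driver="alsa")                 # ALSA audio works on the Raspberry Pi
    sfid = fs.sfload("piano.sf2")           # assumed SoundFont path
    fs.program_select(0, sfid, 0, 0)

    emotion = "happy"                       # would come from the OpenCV classifier
    for _ in range(16):
        note = random.choice(MODELS[emotion])
        fs.noteon(0, note, 100)             # channel 0, velocity 100
        time.sleep(0.25)
        fs.noteoff(0, note)

    fs.delete()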

How we built it:

We built our project in two subteams, one working on the Raspberry Pi/camera/face-detection side and the other on training models and generating music. Building involved a great deal of trial and error: we had to tweak many parts of our programs, from facial-recognition tunables to the organization of our second-order Markov probability tables, to get them just right.
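To illustrate what those probability tables look like, here is a simplified second-order Markov sketch; the training melody is made up for the example, not our actual data:

    # Second-order Markov table: the key is the last two notes, the value
    # counts which note followed that pair in the training data.
    import random
    from collections import defaultdict

    training_notes = [60, 62, 64, 62, 60, 64, 65, 64, 62, 60]  # illustrative only

    table = defaultdict(lambda: defaultdict(int))
    for a, b, c in zip(training_notes, training_notes[1:], training_notes[2:]):
        table[(a, b)][c] += 1           # count transitions (a, b) -> c

    def next_note(a, b):
        """Sample the next note given the previous two."""
        counts = table[(a, b)]
        notes, weights = zip(*counts.items())
        return random.choices(notes, weights=weights)[0]

    # Generate a short melody seeded with the first two training notes.
    melody = [60, 62]
    for _ in range(8):
        melody.append(next_note(melody[-2], melody[-1]))
    print(melody)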

Challenges we ran into:

Some challenges we faced included settling on the initial idea itself, optimizing the OpenCV code so it runs smoothly on a Raspberry Pi, and getting the generated music to sound good.
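The Pi optimizations were along these lines (a minimal sketch with illustrative values, not our final tunables): shrinking each frame and converting it to grayscale before detection cuts the per-frame cost substantially.

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        small = cv2.resize(frame, (320, 240))            # detect on a smaller frame
        gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)   # cascades run on grayscale
        faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(small, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("faces", small)
        if cv2.waitKey(1) == 27:                         # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()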

Accomplishments that we're proud of:

We're proud that we learned and then implemented emotion recognition, and that we not only came to understand how Markov chains can create music but built working models of our own.

What we learned:

This project taught us about procedural generation with Markov chains, face-recognition techniques, a completely new language for some of us (Python), and team management (we had never worked together before), just to name a few things. We stepped far outside our comfort zone on this project.

What's next:

Improving the quality of our emotion-detection models, building an enclosure for the hardware, adding variation in note duration and velocity to the generated music (see the sketch below), and various other quality-of-life improvements.
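For the duration and velocity item, a rough sketch of what that could look like with Mido's MidiFile API; the melody, velocity range, and duration choices are illustrative guesses, not a design we've settled on:

    import random
    from mido import Message, MidiFile, MidiTrack

    mid = MidiFile()
    track = MidiTrack()
    mid.tracks.append(track)

    for note in [60, 64, 67, 72, 67, 64, 60, 55]:        # placeholder melody
        velocity = random.randint(50, 110)                # vary loudness per note
        duration = random.choice([120, 240, 480])         # vary length (in ticks)
        track.append(Message("note_on", note=note, velocity=velocity, time=0))
        track.append(Message("note_off", note=note, velocity=0, time=duration))

    mid.save("generated.mid")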

Built with:

We used a Raspberry Pi, along with a camera and screen, plus the Python OpenCV, FluidSynth, and Mido libraries.

Prizes we're going for:

HAVIT RGB Mechanical Keyboard

Intel® Movidius™ Neural Compute Stick

$100 Amazon Gift Cards

Grand Prize

Raspberry Pi Arcade Gaming Kit

Team Members

Jacob Dillon, Christopher Vassallo, Finn Navin, Brandon Mino