First Experience with OpenFrameworks – Face detection and video

For the past two weeks I have been working with OpenFrameworks quite intensively in order to realise an interactive video installation. In this post I’d like to share some wisdom with you about OpenFrameworks, because I came across quite a few issues, and not all of them were documented elsewhere.

OpenFrameworks is basically a very useful collection of C++ libraries that makes it easy for (creative) coders to quickly create visual (interactive) software without reinventing the wheel. The basis of these kinds of applications is always the same: creating a window, a canvas to draw on, shapes to draw, image loaders, an event system, et cetera. For all of these features, OpenFrameworks has a library available.

Therefore, when defining the installation to create, I quickly noticed OpenFrameworks was the way to go. The installation needed to be able to do the following:

- detect and track a face in a live webcam feed;
- filter the face out of the rest of the image;
- record the result and save it as separate video files.

OpenFrameworks, together with some additional libraries, was capable of carrying out all of the above points. So, back to the story: what did I learn? Because I wrote the final code for the installation in only four days’ time, I won’t post the full code, since it would not be a good base for other people to work with. Instead, I will share some key snippets that might be helpful for your own work with OpenFrameworks.

Extracting the face from video

The basic functionality of the installation revolves around face tracking using the ofxFaceTracker library by Kyle McDonald. This library is able to return a mesh of the whole face, but also of its individual features, such as the nose, eyes and mouth. These features are returned as polylines, which have a function called inside() that checks whether a point lies inside the shape of the polyline (e.g. the eyes or mouth) or outside it. This idea formed the basis of filtering the face out of a webcam feed, as the snippets below show.
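
As a tiny illustration of inside(), the sketch below checks whether the mouse cursor falls inside the mouth. Treat it as an illustration only: I’m writing the feature name OUTER_MOUTH from memory, so verify it against the add-on’s header.

 if (tracker.getFound()) { //only query features when a face was actually detected
   ofPolyline mouth = tracker.getImageFeature(ofxFaceTracker::OUTER_MOUTH);
   if (mouth.inside(mouseX, mouseY)) {
     ofLog(OF_LOG_NOTICE, "cursor is inside the mouth");
   }
 }

The main snippet applies the same trick to the face outline to filter out everything that is not part of the face: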

 //tracker is an instance of ofxFaceTracker; there are plenty of examples showing how to set it up
 ofPolyline facePoints = tracker.getImageFeature(ofxFaceTracker::FACE_OUTLINE); //get the polyline for the whole face

 ofRectangle faceBox = facePoints.getBoundingBox(); //bounding box of the face
 ofPoint faceCenter = faceBox.getCenter();

 ofPixels pixels;
 cam.getTextureReference().readToPixels(pixels); //copy the cam image to ofPixels
 pixels.crop(faceBox.x, faceBox.y, faceBox.width, faceBox.height); //crop to the bounding box of the recognized face

 for (int x = 0; x < pixels.getWidth(); x++) {
   for (int y = 0; y < pixels.getHeight(); y++) {
     //the crop removed the bounding box offset, so add it back in before
     //checking whether this point lies inside the face outline
     ofPoint checkpoint = ofPoint(x + faceBox.x, y + faceBox.y);

     if (!facePoints.inside(checkpoint)) {
       pixels.setColor(x, y, ofColor(0)); //outside the face: make the pixel black
     }
   }
 }
 //ofLog(OF_LOG_NOTICE, "Processed, new width: %d, new height: %d", pixels.getWidth(), pixels.getHeight());
 ofImage videoImage;
 videoImage.setFromPixels(pixels); //finally, create an ofImage from the masked pixels

So the above code takes care of extracting the face from the video feed. Instead of colouring the pixels black, you could also decide to make them transparent or to adjust the colours in a different way.
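
Here is a minimal sketch of that transparent variant, reusing the facePoints, faceBox and cropped pixels from the snippet above; the only new step is adding an alpha channel with setImageType():

 ofPixels rgbaPixels = pixels; //copy the cropped face pixels
 rgbaPixels.setImageType(OF_IMAGE_COLOR_ALPHA); //add an alpha channel

 for (int x = 0; x < rgbaPixels.getWidth(); x++) {
   for (int y = 0; y < rgbaPixels.getHeight(); y++) {
     ofPoint checkpoint = ofPoint(x + faceBox.x, y + faceBox.y);
     if (!facePoints.inside(checkpoint)) {
       ofColor c = rgbaPixels.getColor(x, y);
       c.a = 0; //fully transparent outside the face
       rgbaPixels.setColor(x, y, c);
     }
   }
 }

 ofImage transparentFace;
 transparentFace.setFromPixels(rgbaPixels); //an ofImage with a see-through background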

Saving images to video

Over the past days I tried various libraries that can create videos from ofImage instances. Some of them worked with the QuickTime API, others with a batch script for ffmpeg. In the end, none of them seemed to work when writing multiple video files, until I found ofxQTVideoSaver on jamezilla’s GitHub. It was pulled together from various snippets on the openFrameworks forums, but it was the only one whose example project actually worked. So if you want to create video files from ofImage instances, compressed in QuickTime format, this library is the way to go. This is basically how I used it, in the app’s update loop:

// figure out how much time has elapsed since the last frame
float time = ofGetElapsedTimef() - mTimestamp; //duration of the last frame in secs
float length = ofGetElapsedTimef() - startTimestamp; //total length of the movie so far in secs
if (recording) {
  // add this frame to the movie, with its real elapsed duration
  mVidSaver.addFrame(screen.getPixels(), time); //screen is a resized screengrab created somewhere else
}
if (length > 10) { //stop recording when the movie is longer than 10 secs
  stopRecording();
}
// update the timestamp to the current time
mTimestamp = ofGetElapsedTimef();
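
For completeness, here is a rough sketch of how the recording could be started and stopped around that snippet. The setup() and finishMovie() calls are the ones I remember from the add-on’s example project, and movieCount is a hypothetical counter used to give each file a unique name, so check ofxQTVideoSaver’s header for the exact signatures.

 void testApp::startRecording() {
   //assumed signature: width, height, output filename
   mVidSaver.setup(screen.getWidth(), screen.getHeight(), "face-" + ofToString(movieCount++) + ".mov");
   startTimestamp = ofGetElapsedTimef();
   mTimestamp = startTimestamp;
   recording = true;
 }

 void testApp::stopRecording() {
   recording = false;
   mVidSaver.finishMovie(); //finalise and close the current .mov file
 }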

Those are the two main things I wanted to make recommendations on, but OpenFrameworks is capable of doing them even more effectively, using threading to balance the load of different processes and GPU shaders to modify images. Both features are already available in OpenFrameworks, via ofThread, ofFbo and ofShader, but they were difficult for me to learn about in such a short time. What I did try out was a ready-made library called ofxBlur, which blurs the image in real time. It worked like a charm and didn’t influence the performance of the program at all!
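
For reference, using ofxBlur boils down to wrapping your drawing code between begin() and end(), roughly as in the add-on’s example project (a sketch; the setup arguments may differ between versions):

 #include "ofxBlur.h"

 ofxBlur blur;

 void testApp::setup() {
   blur.setup(ofGetWidth(), ofGetHeight()); //allocate the blur at window size
 }

 void testApp::draw() {
   blur.begin();          //everything drawn from here on gets blurred
   videoImage.draw(0, 0); //e.g. the extracted face from earlier
   blur.end();
   blur.draw();           //draw the blurred result to the screen
 }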

That’s it for now; I’d like to share the video of the final installation here when it’s finished. If you’re interested, you can already view a short teaser: