Leap Motion: A big leap for Interaction Design

I’m pretty confident you’ve all heard of the Leap, a black box that will “revolutionize the way we interact with computers”. More than half a year ago, this box was announced with some spectacular examples, such as flying through a data visualisation and pinch-zooming a world map. Of course, the mere fact that this device can track both of your hands in real time in three dimensions allows for far more interesting applications than pinch-zooming a world map, but the introduction nevertheless looked awesome.

Since this introduction, multiple development versions of the Leap have been distributed among all kinds of developers. From what I have seen, multiple revisions of the device circulate on the internet (for example, revision 5 supports finger joints, while revision 6 should be the “final” version), slowly heading towards the final revision, which will be for sale “soon”…

To help all these developers, the API supports many different frameworks and programming languages. For example, Unity, an easy-to-use game engine, is supported, but there’s also support for creating browser-based apps in WebGL.
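To give an impression of what working against such an API looks like, here is a small sketch that parses a tracking frame. The frame layout and field names (`hands`, `pointables`, `tipPosition`) are assumptions loosely modeled on the JSON frames the Leap streams to browser apps, not an exact copy of its protocol:

```python
import json

# A hypothetical example frame; the structure and field names are assumptions
# for illustration, loosely inspired by the Leap's JSON frame format.
sample_frame = json.dumps({
    "id": 1001,
    "hands": [{"id": 1, "palmPosition": [10.5, 180.0, -20.3]}],
    "pointables": [
        {"id": 3, "handId": 1, "tipPosition": [35.2, 210.7, -45.1]},
        {"id": 4, "handId": 1, "tipPosition": [60.8, 205.3, -40.9]},
    ],
})

def finger_tips(frame_json):
    """Return a list of (x, y, z) tip positions for every tracked finger."""
    frame = json.loads(frame_json)
    return [tuple(p["tipPosition"]) for p in frame.get("pointables", [])]

print(len(finger_tips(sample_frame)))  # number of fingers tracked this frame
```

An app would run something like `finger_tips` on every incoming frame and map the positions to cursor movement, gestures, or 3D object control.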

So how does this box even work? Looking at the different videos of developer kits, it seems like the Leap only uses two cameras to do the complete tracking work. It’s plausible that these are just “regular” cameras, while the software behind them actually creates all the magic.
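The basic principle behind recovering depth from two ordinary cameras is stereo triangulation: a point appears at slightly different horizontal pixel positions in the two images, and that disparity translates into distance. A minimal sketch of the idea, with made-up numbers rather than actual Leap specifications:

```python
# Stereo triangulation: depth Z = f * B / d, where f is the focal length in
# pixels, B the baseline (distance between the two cameras), and
# d = x_left - x_right the disparity of the point between the two images.
# The default focal length and baseline below are illustrative assumptions.

def depth_from_disparity(x_left, x_right, focal_px=700.0, baseline_mm=40.0):
    """Triangulate depth (in mm) from the pixel disparity between two cameras."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must appear further left in the left image")
    return focal_px * baseline_mm / disparity

# A fingertip seen at x=420 px (left camera) and x=380 px (right camera):
print(depth_from_disparity(420, 380))  # 700 * 40 / 40 = 700.0 mm
```

The hard part, and presumably where the Leap’s software magic lies, is reliably matching the same fingertip between the two images for every frame.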

Some Examples

So far, only some developers have been lucky enough to work with the Leap. I’ll show a few of their videos to demonstrate what the Leap is capable of.

The video above clearly shows real-time manipulation of objects in a physics simulation. This is one of the most obvious and valuable applications of the Leap; until now, we haven’t been able to directly interact with 3D virtual objects on our computers.
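One way such a simulation could decide that a user is “grabbing” an object is a simple pinch test: if the thumb tip and index tip come close enough together, treat it as a grab. This is a hypothetical sketch, not the method used in the video; the threshold value is an assumption:

```python
import math

# Hypothetical pinch detection: positions are (x, y, z) fingertip coordinates
# in millimetres. The 30 mm threshold is an illustrative assumption, not a
# Leap constant.
PINCH_THRESHOLD_MM = 30.0

def is_pinching(thumb_tip, index_tip, threshold=PINCH_THRESHOLD_MM):
    """Return True when the two fingertips are close enough to count as a pinch."""
    return math.dist(thumb_tip, index_tip) < threshold

print(is_pinching((0, 200, 0), (20, 210, 5)))   # tips ~23 mm apart
print(is_pinching((0, 200, 0), (80, 230, 5)))   # tips ~86 mm apart
</imports>```

While pinching, the app would attach the nearest virtual object to the hand and release it when the fingers separate again.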

This video shows different ways you can work with the Leap. I especially like the browser integration, since this opens the way to custom gesture interactions on each website. That would definitely make the internet a more interesting place!
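A per-site custom gesture could be as simple as detecting a horizontal swipe from a short history of palm positions, one sample per frame. This sketch is an assumption about how one might implement it; the 80 mm travel threshold is an arbitrary illustration value:

```python
# Detect a horizontal "swipe" from a short history of palm x-positions (mm),
# one sample per tracking frame. Threshold chosen arbitrarily for illustration.

def detect_swipe(x_history, min_travel_mm=80.0):
    """Return 'left', 'right', or None based on net horizontal palm travel."""
    if len(x_history) < 2:
        return None
    travel = x_history[-1] - x_history[0]
    if travel >= min_travel_mm:
        return "right"
    if travel <= -min_travel_mm:
        return "left"
    return None

print(detect_swipe([0, 25, 60, 95]))    # net +95 mm: a swipe to the right
print(detect_swipe([50, 40, 35, 30]))   # net -20 mm: not enough travel
```

A website could map such swipes to navigation, e.g. flipping to the next page of a photo gallery.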

Something I noticed while watching this video is the input “lag”. He honestly pointed out that the Leap works less well in low-light situations, switching to “robust mode”. This also suggests the cameras inside the Leap are just regular ones, creating 3D images by combining the video from both cameras.

This is a fairly straightforward idea and works just as expected!



Whether the Leap will become a success and eventually replace the mouse or keyboard depends on the “apps” created for this piece of hardware. The amount and quality of available apps is important to ensure that the Leap hits the market with a bang! What’s nice about this idea is that basically every application could benefit from it. A short list of app requests for the Leap:

I would love to work on any of my ideas once I have a Leap of my own; no more clicking and typing, just smile and wave, boys, smile and wave…

So what to expect from this Leap device? The technology in the box looks pretty simple (“just” two cameras), but behind these components something much more valuable has been created: the possibility to interact with your computer in three dimensions using all ten fingers. It’s difficult to say exactly what impact this device will have on Human-Computer Interaction in general, but it’s evident that the Leap can do just as much as you can imagine.