With hand gestures recalling those that first reached the mainstream in Minority Report, “Airborne Beats” lets you make music just by gesturing with your hands and fingers in mid-air. You can drag audio samples around and use gestures to control both production and performance.
Coming from the labs at Oblong, it’s the latest etude in a long series of these kinds of interfaces (see below). The team points out, in turn, that this approach could work with any time-based interface. And by its nature, the interface also makes those tasks collaborative.
If you’re wondering how the app was built, Airborne Beats was programmed in C++ using g-speak and the creative coding library Cinder. As for hardware, the team pulled together a single screen equipped with an IR sensor, a “commodity” computer (with g-speak installed), and speakers. For the most part, it was designed and developed by a single person. Although Airborne Beats is currently a demo, its users could be composers, DJs, or perhaps even children or educators. And the ability to recognize multiple hands opens up some unique collaborative possibilities (guest DJ, anyone?).
Now, it’s clear a lot of work and talent went into the app. But I can’t help noticing that the results are, frankly, a bit awkward. (Of course, that’s why testing and experimentation are so valuable: there’s no substitute for trying things out.) There’s some really clever stuff in there, including the overlay of envelopes atop waveforms and the way the interactions work, particularly grabbing audio from a pool. But while it shows potential, it’s also hard to see many advantages over conventional input for the same interface.
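For readers curious what that envelope-over-waveform idea amounts to under the hood: the app’s actual code isn’t public, but a plain C++ sketch of the underlying operation — applying a piecewise-linear amplitude envelope to a buffer of samples — might look like this. All names here are hypothetical illustrations, not anything from Airborne Beats, g-speak, or Cinder.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: an envelope is a list of breakpoints
// ({normalized position in [0,1], gain}), assumed sorted and
// spanning the full buffer. Each sample is scaled by the gain
// linearly interpolated between its surrounding breakpoints.
struct Breakpoint { float pos; float gain; };

std::vector<float> applyEnvelope(const std::vector<float>& samples,
                                 const std::vector<Breakpoint>& env) {
    std::vector<float> out(samples.size());
    for (std::size_t i = 0; i < samples.size(); ++i) {
        // Normalized position of this sample within the buffer.
        float pos = samples.size() > 1
                        ? static_cast<float>(i) / (samples.size() - 1)
                        : 0.0f;
        float gain = env.back().gain;
        // Find the breakpoint pair bracketing pos and interpolate.
        for (std::size_t k = 0; k + 1 < env.size(); ++k) {
            if (pos >= env[k].pos && pos <= env[k + 1].pos) {
                float span = env[k + 1].pos - env[k].pos;
                float t = span > 0.0f ? (pos - env[k].pos) / span : 0.0f;
                gain = env[k].gain + t * (env[k + 1].gain - env[k].gain);
                break;
            }
        }
        out[i] = samples[i] * gain;
    }
    return out;
}
```

Dragging an envelope handle in a gestural UI would then simply update one breakpoint’s position or gain and re-render the scaled waveform.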