To an applications programmer, the shift to gestural input is as big as the shift to the mouse was twenty-five years ago. It’s both exciting and a little daunting. The g-speak input framework allows direct, either-handed, multi-person manipulation of any pixel on any screen you can see. Pointing is millimeter-accurate from across the room. Hand pose, angle, and position in space are all available at 100 hertz, with no perceptible latency and to sub-millimeter precision.
(Quoted from oblong.com)
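The quoted passage describes a stream of hand pose, angle, and position samples arriving at 100 Hz, with millimeter-accurate pointing from across the room. As a rough illustration of what "pointing at a pixel" involves geometrically, here is a minimal sketch of projecting a pointing ray onto a screen plane. Everything here is hypothetical: g-speak's actual API is not shown in the quote, so the names and data shapes below are my own invention.

```python
from dataclasses import dataclass

# Hypothetical sketch only: g-speak's real interfaces are not public
# in the quoted material, so these names are illustrative.

@dataclass
class HandSample:
    position: tuple   # (x, y, z) in metres, room coordinates
    direction: tuple  # unit vector along which the hand points

def screen_intersection(sample, screen_z=0.0):
    """Project the pointing ray onto a screen plane at z = screen_z.

    Returns the (x, y) hit point on the plane, or None if the hand
    points parallel to or away from the screen."""
    px, py, pz = sample.position
    dx, dy, dz = sample.direction
    if abs(dz) < 1e-9:
        return None  # ray parallel to the screen plane
    t = (screen_z - pz) / dz
    if t <= 0:
        return None  # pointing away from the screen
    return (px + t * dx, py + t * dy)

# A hand two metres from the screen, pointing straight at it:
sample = HandSample(position=(0.5, 1.2, 2.0), direction=(0.0, 0.0, -1.0))
print(screen_intersection(sample))  # (0.5, 1.2)
```

At 100 Hz, a loop consuming samples like these would update the cursor every 10 ms, which is consistent with the "no perceptible latency" claim; the precision claims would then come down to the quality of the tracking data feeding the ray.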
Oblong is a company founded in 2006 whose principal work and research have centred on a new spatial operating environment called g-speak. This technology gives pixel-precise manipulation of multiple screens via hand gestures. And before images of Mr Cruise flapping his arms around pop into your heads, the link is perfectly apt: according to Oblong’s website, one of Oblong’s founders served as science advisor to Minority Report and based the design of those scenes directly on his earlier work at MIT.
Building g-speak is a design exercise at three levels. Most obviously, there is a new graphical computing environment — a new look and feel, in our industry’s argot. Those graphics are inseparable from an architecture that motivates and produces them. Finally, we design and use applications that run on top of this foundation.
I wonder what ramifications such a change in how we interact with the computer will have for software tools of the future, and indeed for how we think about design with those tools. Could we, for example, envisage 3D modelling within such a system, the artist literally sculpting images? Funny, for some odd reason, Patrick Swayze comes to mind and that seminal sequence in Ghost. Joking aside, and beyond the obvious wow factor of this technology, I’m interested to see how such interactions take shape. Are we really on the verge of adopting another form of interaction, or is this a little too close to science-fiction film to be of practical use on a larger scale?