Time-to-Adoption Horizon: Four to Five Years
For nearly forty years, the keyboard and mouse have been the primary means of interacting with computers. The Nintendo Wii in 2006 and the Apple iPhone in 2007 signaled the beginning of widespread consumer interest in, and acceptance of, interfaces based on natural human gestures. Now, new devices are appearing on the market that take advantage of motions that are easy and intuitive to make, giving us an unprecedented level of control over the devices around us. Cameras and sensors pick up the movements of our bodies without the need for remotes or handheld tracking tools. The full realization of the potential of gesture-based computing is still several years away, especially for education, but we are moving ever closer to a time when our gestures will speak for us, even to our machines.

Overview

It is already common to interact with a new class of devices entirely through natural gestures. The Microsoft Surface, the iPhone and iPod Touch, the Nintendo Wii, and other gesture-based systems accept input in the form of taps, swipes, and other touches; hand and arm motions; or whole-body movement. These are the first in a growing array of alternative input devices that allow computers to recognize and interpret natural physical gestures as a means of control. We are seeing a gradual shift towards interfaces that adapt to, or are built expressly for, humans and human movement. Gestural interfaces let users engage in virtual activities with motions similar to those they would use in the real world, manipulating content intuitively. The idea that natural, comfortable motions can control computers is opening the way to a host of input devices that look and feel very different from the keyboard and mouse.
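As a rough illustration of what it takes to interpret this kind of input, the sketch below classifies a raw touch trace as a tap or a directional swipe. It is a minimal, hypothetical example; the names and thresholds are invented for illustration and do not correspond to any vendor's actual API.

```python
# Minimal sketch: classify a finger-down ... finger-up touch trace as a
# tap or a directional swipe. All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class TouchSample:
    x: float  # screen position in pixels
    y: float
    t: float  # timestamp in seconds

TAP_MAX_DISTANCE = 10.0  # pixels of finger drift still counted as a tap
TAP_MAX_DURATION = 0.3   # seconds

def classify_touch(samples: list) -> str:
    """Label a touch trace as 'tap', a directional swipe, or 'unknown'."""
    if len(samples) < 2:
        return "unknown"
    first, last = samples[0], samples[-1]
    dx, dy = last.x - first.x, last.y - first.y
    distance = (dx * dx + dy * dy) ** 0.5
    duration = last.t - first.t
    if distance <= TAP_MAX_DISTANCE and duration <= TAP_MAX_DURATION:
        return "tap"
    if distance > TAP_MAX_DISTANCE:
        # Report the dominant axis of motion as the swipe direction.
        if abs(dx) >= abs(dy):
            return "swipe-right" if dx > 0 else "swipe-left"
        return "swipe-down" if dy > 0 else "swipe-up"
    return "unknown"

# A quick 80-pixel horizontal drag registers as a right swipe.
trace = [TouchSample(100, 200, 0.00), TouchSample(180, 205, 0.15)]
print(classify_touch(trace))  # swipe-right
```

Real multi-touch systems add layers of sophistication, such as multiple simultaneous fingers, pressure, and velocity curves, but the core task is the same: reduce a stream of sensor samples to a small vocabulary of named gestures.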

As the underlying technologies evolve, a variety of approaches to gesture-based input are being explored. The screens of the iPhone and the Surface, for instance, react to pressure, motion, and the number of fingers touching them. The iPhone can additionally react to manipulation of the device itself: shaking, rotating, tilting, or moving it in space. The Wii and other emerging gaming systems use a combination of a handheld, accelerometer-based controller and a stationary infrared sensor to determine position, acceleration, and direction. The technology to detect gestural movement and to display its results is improving very rapidly, increasing the opportunities for this kind of interaction. Two new gaming systems are expected in 2010: a Sony platform based on a motion sensor code-named Gem, and the Microsoft Natal system. Both take a step closer to stripping the gesture-based interface of anything beyond the gesture and the machine, at least in terms of how the user experiences it.
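To make the accelerometer-based approach concrete, here is a minimal sketch of how a handheld controller might flag a shake in a stream of raw acceleration readings. The threshold values and function name are assumptions made for illustration; they are not drawn from any platform's real interface.

```python
# Minimal sketch: detect a "shake" gesture in raw accelerometer data.
# Thresholds are illustrative guesses, not values from any real device.
import math

GRAVITY = 9.81         # m/s^2; a device at rest measures about 1 g
JOLT_THRESHOLD = 15.0  # deviation from gravity that counts as a jolt
JOLTS_FOR_SHAKE = 3    # number of jolts required to call it a shake

def detect_shake(readings):
    """Return True if a list of (x, y, z) accelerations contains a shake."""
    jolts = 0
    for x, y, z in readings:
        magnitude = math.sqrt(x * x + y * y + z * z)
        # A large deviation from 1 g means the device was jerked.
        if abs(magnitude - GRAVITY) > JOLT_THRESHOLD:
            jolts += 1
    return jolts >= JOLTS_FOR_SHAKE

# A few violent spikes amid quiet readings register as a shake.
stream = [(0, 0, 9.8), (25, 3, 30), (0, 0, 9.8), (28, 1, 27), (2, 31, 20)]
print(detect_shake(stream))  # True
```

Systems like the Wii refine this idea considerably, fusing accelerometer data with the infrared sensor's position fixes to recover direction and speed as well as raw motion.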

Gesture-based interfaces are changing the way we interact with computers, giving us more intuitive control over our devices. They are increasingly built into products we already use: Logitech and Apple have brought gesture-based mice to market, and Microsoft is developing several models. Smart phones, remote controls, and touch-screen computers accept gesture input. As more of these devices are developed and released, our options for controlling a host of electronic devices are expanding. We can make music louder or softer by moving a hand, or skip a track with the flick of a finger. Apple's Remote app turns the iPhone into a remote control for the Apple TV; users can search, play, pause, rewind, and so on, just by gliding a finger over the iPhone's surface. Instead of learning where to point and click and how to type, we are beginning to be able to expect our computers to respond to natural movements that make sense to us.
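The other half of the interaction is mapping a recognized gesture to a device command. The sketch below shows how a media player might translate a finger flick into a track skip; the command names are hypothetical, and the gesture labels follow the classify_touch example given earlier.

```python
# Minimal sketch: dispatch recognized gestures to media-player commands.
# Gesture labels and command names are hypothetical illustrations.
GESTURE_COMMANDS = {
    "swipe-right": "next_track",
    "swipe-left": "previous_track",
    "swipe-up": "volume_up",
    "swipe-down": "volume_down",
    "tap": "play_pause",
}

def dispatch(gesture: str) -> str:
    """Translate a recognized gesture into a player command."""
    return GESTURE_COMMANDS.get(gesture, "ignore")

print(dispatch("swipe-right"))  # next_track
print(dispatch("shrug"))        # ignore: unrecognized gestures do nothing
```

The appeal of such a table-driven design is that the same recognizer can drive many applications; only the mapping changes.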

Currently, the most common applications of gesture-based computing are computer games, file and media browsing, and simulation and training. A number of simple mobile applications use gestures: Mover lets users “flick” photos and files from one phone to another; Shut Up, an app from Nokia, silences the phone when the user turns it upside down; and nAlertme, an anti-theft app, sounds an alarm if the phone is not shaken in a specific, preset way when it is switched on. Some companies are exploring further possibilities; Softkinetic (http://www.softkinetic.net), for instance, develops platforms that support gesture-based technology and designs custom applications for clients in fields that include interactive marketing and consumer electronics as well as games and entertainment.
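The Shut Up behavior mentioned above reduces to a simple orientation test. The following sketch guesses at the underlying logic using a hypothetical sensor reading; real phones expose orientation through their own platform APIs.

```python
# Minimal sketch: silence a ringing phone when it is turned face down.
# The threshold and function are hypothetical, for illustration only.
Z_FACE_DOWN = -8.0  # m/s^2; gravity reads roughly -9.8 on the z axis
                    # (which points out of the screen) when face down

def handle_ring(z_acceleration: float, ringing: bool) -> str:
    """Decide what a phone should do given its z-axis accelerometer reading."""
    if ringing and z_acceleration < Z_FACE_DOWN:
        return "silence"  # screen facing down: mute the ringer
    return "keep ringing" if ringing else "idle"

print(handle_ring(-9.6, ringing=True))  # silence
print(handle_ring(9.6, ringing=True))   # keep ringing
```

An anti-theft gesture like nAlertme's amounts to the inverse check: the expected motion must occur, or the alarm fires.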

Because it changes not only the physical and mechanical aspects of interacting with computers, but also our perception of what it means to work with a computer, gesture-based computing is a potentially transformative technology. When the machine responds to movements that feel natural, the distance between user and machine decreases and the sense of power and control increases. Unlike a keyboard and mouse, gestural interfaces can often be used by more than one person at a time, making truly collaborative activities and games possible. Gestural interaction also alters our perception of the kinds of activities computers are good for: activities that require sweeping movements, such as many sports and exercises, are well suited to gestural interfaces.

Relevance for Teaching, Learning, or Creative Inquiry

The kinesthetic nature of gesture-based computing will very likely lead to new kinds of teaching and training simulations that look, feel, and operate almost exactly like their real-world counterparts. The ease and intuitiveness of a gestural interface make the experience seem natural, even fun. Already, medical students benefit from simulations that teach the use of specific tools through gesture-based interfaces, and it is easy to see how such interfaces could be applied in the visual arts and other fields where fine motor skills come into play. When combined with haptic (touch- or motion-based) feedback, the overall effect is very compelling.

Larger multi-touch displays support collaborative work, allowing multiple users to interact with content simultaneously. In schools where the Microsoft Surface has been installed in study areas, staff report that students naturally gravitate to the devices when they want to study together. The promotional video for Microsoft's Natal system shows a family taking on different roles in a racing game, such as driver and pit crew, and suggests that role-playing activities in which several students perform different but related tasks will become a common scenario for tools that use gesture-based computing.

Pranav Mistry, while at the MIT Media Lab, developed a gesture-based system called Sixth Sense that uses markers to allow interaction with all sorts of real-time information and data in extremely intuitive ways. He recently announced the release of the platform into open source (http://www.youtube.com/watch?v=YrtANPtnhyg), which is likely to stimulate a raft of new ideas. Mgestyk's gesture-based control system uses a three-dimensional camera to capture user movements. The system has been demonstrated with Microsoft Flight Simulator, allowing players to fly a simulated plane simply by moving their hands, without any joystick or remote (see http://www.youtube.com/watch?v=FZyErkPjOR8). The system is expected to reach the market in late spring 2010 at a cost comparable to that of a high-end webcam. It is not difficult to picture similar applications, a little further down the road, that could be used to simulate many kinds of experiences.

A sampling of applications for gesture-based computing across disciplines includes the following:

  • Kinesiology. The Dutch company Silverfit uses a gesture-based system to deliver fitness games designed for the elderly. Used in elder care organizations, the games provide gentle exercise and practice in activities of daily living.
  • Medicine. Digital Lightbox by BrainLAB is a multi-touch screen that allows doctors and surgeons to view and manipulate data from MRI, CT, x-ray, and other scan images. The system integrates with hospital data sources to enable health professionals to collaborate throughout the cycle of treatment.
  • Sign Language. Researchers at the Georgia Institute of Technology have developed gesture-based games designed to help deaf children learn sign language. Deaf children of hearing parents often lack opportunities to pick up language serendipitously in the way hearing children do; the games provide an opportunity for incidental learning.
  • Surgical Training. After discovering the significant improvement in dexterity that surgeons-in-training gained from interacting with the Wii (in one study, those who warmed up with the Wii scored an average of 48% higher on tool tests and simulated surgical procedures than those who did not), researchers are developing a set of Wii-based medical training materials for students in developing countries.

Gesture-Based Computing in Practice

The following links provide examples of gesture-based computing.

CMU Grad Students Build 3-D Snowball Fight
http://www.post-gazette.com/pg/09308/1010559-96.stm
(Ann Belser, Pittsburgh Post-Gazette, 4 November 2009.) As an assignment, several graduate students at Carnegie Mellon University created a gesture-based snowball fight game using PC software and components from the Nintendo Wii.

Microsoft's Finally Got Game
http://blog.newsweek.com/blogs/techtonicshifts/archive/2009/11/05/microsoft-s-finally-got-game.aspx
(Nick Summers, Newsweek, 5 November 2009.) Microsoft's Project Natal uses full-body movement to interact with the game console, without any kind of controller or remote. The product, still in development, uses infrared light and a camera to sense users' movements, eliminating the need for handheld equipment and placing the user's own silhouette in the game world.

Parkinson's Patients Go to Wii-hab
http://www.livescience.com/technology/090611-wii-parkinsons.html
(LiveScience, 11 June 2009.) In a study undertaken at the Medical College of Georgia’s School of Allied Health Sciences, Parkinson's patients showed significant improvement when playing games on the Wii was added to their therapy.

University Offers New Technology to Help Students Study
http://www.unr.edu/nevadanews/templates/details.aspx?articleid=5194&zoneid=14
(Skyler Dillon, Nevada News, 1 October 2009.) The Mathewson-IGT Knowledge Center at the University of Nevada, Reno has installed two Microsoft Surfaces in its study area and developed a custom anatomy study guide. Placing a coded lab assignment or tagged model on the screen calls up diagrams related to the material. Students can manipulate the diagrams using hand and finger gestures while they study independently or collaboratively.

The Virtual Autopsy Table
http://www.visualiseringscenter.se/1/1.0.1.0/230/2/
Researchers at the Norrköping Visualization Center and the Center for Medical Image Science and Visualization in Sweden have created a virtual autopsy using a multi-touch table. Detailed CT scans of a living or deceased person are transferred to the table, where they can be manipulated by hand, allowing forensic scientists to examine a body, make virtual cross-sections, and view layers including skin, muscle, blood vessels, and bone.

For Further Reading

The following articles and resources are recommended for those who wish to learn more about gesture-based computing.

The Best Computer Interfaces: Past, Present, and Future
http://www.technologyreview.com/computing/22393/page1
(Duncan Graham-Rowe, Technology Review, 6 April 2009.) This article discusses a variety of human-computer interfaces, including gesture-sensing, voice recognition, and multi-touch surfaces.

A Better, Cheaper Multitouch Interface
http://www.technologyreview.com/computing/22358/?a=f
(Kate Greene, Technology Review, 30 March 2009.) New York University is developing a new multi-touch interface that accepts gesture-based input on a specially designed pad. The Inexpensive Multi-touch Pressure Acquisition Device (IMPAD) is a very thin surface that can be used on a desktop, a wall, a mobile device, or a touch screen.

Sony Motion Controller Demo: Dueling Domino Snakes
http://www.shacknews.com/onearticle.x/60518
(Nick Breckon, ShackNews, 18 September 2009.) Sony is developing a motion controller to be released in 2010. This article includes a video demonstration of some of the system's capabilities. In terms of how it is controlled, the system is characterized as falling somewhere between the Nintendo Wii and the unreleased Microsoft Natal system.

Touching: All Rumors Point To The End Of Keys/Buttons
http://www.techcrunch.com/2009/09/29/touching-all-rumors-point-to-the-end-of-keysbuttons/
(MG Siegler, TechCrunch, 29 September 2009.) This article describes a number of touch- and gesture-based devices from Apple and speculates on what might be forthcoming.

Why Desktop Touch Screens Don't Really Work Well For Humans
http://www.washingtonpost.com/wp-dyn/content/article/2009/10/13/AR2009101300113.html
(Michael Arrington, The Washington Post, 12 October 2009.) Desktop touch screens are available (like the HP TouchSmart line), but they are difficult to use over long periods. This article suggests another design approach.

Delicious: Gesture-Based Computing
http://delicious.com/tag/hz10+altinput
Follow this link to find additional resources tagged for this topic and this edition of the Horizon Report. To add to this list, simply tag resources with “hz10” and “altinput” when you save them to Delicious.

Posted by NMC on January 14, 2010
Tags: chapters
