Time-to-Adoption Horizon: Four to Five Years
Thanks in part to the Nintendo Wii, the Apple iPhone, and the iPad, many people now have some immediate experience with gesture-based computing as a means for interacting with a computer. The proliferation of games and devices that incorporate easy and intuitive gestural interactions will certainly continue, bringing with it a new era of user interface design that moves well beyond the keyboard and mouse. While the full realization of the potential of gesture-based computing remains several years away, especially in education, its significance should not be underestimated, particularly for a new generation of students accustomed to touching, tapping, swiping, jumping, and moving as a means of engaging with information.
It’s almost a cliché to say it, but for many people the first exposure to gesture-based computing may have occurred over a decade ago, watching Tom Cruise in Minority Report swat information around in front of him by swinging his arms. The fact that John Underkoffler, who designed the movie’s fictional interface, presented a non-fiction version of it, called the G-Speak, in a 2010 TED Talk underscores the growing relevance and promise of gesture-based computing. The G-Speak tracks hand movements and allows users to manipulate 3D objects in space. This device, along with SixthSense, which was developed by Pranav Mistry while at the MIT Media Lab and uses visual markers and gesture recognition to allow interaction with real-time information, has ignited the cultural imagination regarding the implications of gesture-based computing. That imagination is further fueled by the Kinect system for the Xbox, which continues to explore the potential of human movement in gaming. In short, gesture-based computing is moving from fictional fantasy to lived experience.
The approaches to gesture-based input vary. The screens of the iPhone, the iPad, and Microsoft’s multi-touch Surface all react to pressure, motion, and the number of fingers used in touching the devices. Some devices react to shaking, rotating, tilting, or moving the device in space. The Wii, for example, along with similar gaming systems, functions by combining a handheld, accelerometer-based controller with a stationary infrared sensor to determine position, acceleration, and direction. Development in this area centers on creating a minimal interface, and on producing an experience of direct interaction such that, cognitively, the hand and body become input devices themselves. The Sony PlayStation 3 Motion Controller and the Microsoft Kinect system both move closer to this ideal.
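The accelerometer-based approach described above can be illustrated with a minimal sketch: a hypothetical shake detector that flags a gesture when the magnitude of the acceleration vector repeatedly exceeds a threshold. The function name, threshold values, and sample readings are illustrative assumptions, not any console's actual algorithm.

```python
import math

def detect_shake(samples, threshold=2.5, min_peaks=3):
    """Flag a 'shake' gesture when the acceleration magnitude
    (in g, with gravity included) crosses a threshold several times.

    samples: list of (x, y, z) accelerometer readings in g.
    """
    peaks = 0
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude > threshold:
            peaks += 1
    return peaks >= min_peaks

# A device at rest reads roughly (0, 0, 1): gravity alone.
still = [(0.0, 0.02, 1.0)] * 20
# Vigorous movement produces repeated high-magnitude spikes.
shaken = still + [(2.1, -1.8, 1.0), (-2.4, 2.0, 0.9), (1.9, -2.2, 1.1)]
```

Real systems refine this idea with filtering and direction analysis, but the core pattern, comparing sensed motion against a gesture template, is the same.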
The technologies for gesture-based input also continue to expand. Evoluce has created a touch-screen display that responds to gestures, and is working on a way to allow people to interact with Windows 7 through the Kinect system. Similarly, students at the MIT Media Lab have developed DepthJS, which unites the Kinect with the web, allowing users to interact with the Google Chrome web browser through gestures. Also at MIT, researchers are developing inexpensive gesture-based interfaces that track the entire hand. Elliptic Labs recently announced a dock that will let users interact with their iPad through gestures.
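Projects like DepthJS work by translating tracked hand positions from a depth camera into familiar browser actions. The sketch below shows that translation step in the abstract; the event names, coordinate conventions, and thresholds are invented for illustration and do not reflect DepthJS's actual API.

```python
def hand_to_browser_event(x, y, depth_mm, push_depth_mm=600):
    """Map a tracked hand position from a depth camera to a
    browser-style event (all names and thresholds illustrative).

    (x, y): normalized hand position in [0, 1].
    depth_mm: hand distance from the sensor in millimetres.
    """
    if depth_mm < push_depth_mm:
        # A hand pushed toward the sensor acts like a click.
        return {"type": "click", "x": x, "y": y}
    if y < 0.2:
        return {"type": "scroll", "direction": "up"}
    if y > 0.8:
        return {"type": "scroll", "direction": "down"}
    return {"type": "move", "x": x, "y": y}
```

For example, a hand held high in the frame at normal distance would produce `{"type": "scroll", "direction": "up"}`, while a hand pushed toward the sensor would register as a click.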
Another direction for technological innovation centers on haptics, which refers to the tactile feedback communicated to a user. At McGill University, researchers are developing a haptic feedback system that gives people with visual impairments richer feedback through fine degrees of touch. A researcher with the Media Computing Group at RWTH Aachen University, Germany, has created MudPad, a localized active haptic feedback interface for fluid touch surfaces that promises more nuanced ways to interact with screens through touch.
Other researchers are exploring ways to use gestural computing with mobile devices. GestureTek’s Momo software, for example, uses two different trackers to detect motion and the position of objects, and is designed to bring gesture-based computing to phones. iDENT Technology’s Near Field Electrical Sensing Interfaces is designed to allow mobiles to respond to grip and proximity sensing. A ringing mobile will put the call through if it is picked up and held, but will send it to voice mail if it is picked up and quickly put down again.
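The grip-and-proximity behavior described above amounts to a small decision rule over sensor events. The sketch below is a hypothetical version of that rule, not iDENT Technology's implementation; the event format, function name, and timing threshold are all assumptions.

```python
def route_incoming_call(grip_events, hold_threshold=2.0):
    """Decide what to do with a ringing phone from grip-sensor events.

    grip_events: list of (timestamp_seconds, "gripped" | "released").
    Returns "answer" if the phone is picked up and held,
    "voicemail" if it is gripped and quickly put down again,
    "keep_ringing" if it is never picked up.
    """
    gripped_at = None
    for t, event in grip_events:
        if event == "gripped":
            gripped_at = t
        elif event == "released" and gripped_at is not None:
            if t - gripped_at < hold_threshold:
                return "voicemail"  # picked up, quickly put down
            return "answer"         # held long enough to answer
    # Still being held when the ring ends also counts as answering.
    return "answer" if gripped_at is not None else "keep_ringing"
```

A grip followed half a second later by a release routes the call to voice mail, while a sustained grip answers it, matching the behavior described in the text.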
While gesture-based computing has found a natural home in gaming, as well as in browsing files, its potential uses are far broader. The ability to move through three-dimensional visualizations could prove compelling and productive, for example, and gesture-based computing is well suited to simulation and training. Gesture-based computing has strong potential in education, both for learning, as students will be able to interact with ideas and information in new ways, and for teaching, as faculty explore new ways to communicate ideas. It also has the potential to transform what we understand to be scholarly methods for sharing ideas.
Gesture-based computing is changing the ways that we interact with computers, both physically and mechanically. As such, it is at once transformative and disruptive. Researchers and developers are just beginning to gain a sense of the cognitive and cultural dimensions of gesture-based communicating, and the full realization of the potential of gesture-based computing within higher education will require intensive interdisciplinary collaborations and innovative thinking about the very nature of teaching, learning, and communicating.
Relevance for Teaching, Learning, Research, or Creative Inquiry
Gesture-based computing has already proven productive in training simulations that operate almost exactly like their real-world counterparts. Gestural interfaces can allow users to easily perform precise manipulations that can be difficult with a mouse, as the video editing system Tamper makes plain (see the demonstration video at http://www.youtube.com/user/oblongtamper). Gesture-based computing also opens up unparalleled avenues of accessibility, interaction, and collaboration for learners.
Imagine an interface that allows students to determine or change the DNA of a fruit fly by piecing it together by hand, page through a fragile text from the Middle Ages, or practice surgical operations using the same movements a surgeon would. With gestural interfaces, discovery-based learning opportunities like these are likely to be common scenarios. Although these examples are hypothetical, research in the field of gesture-based computing is expanding rapidly and early results show that applications like these are not far-fetched.
While one direction for gesture-based computing attempts to recreate or improve upon existing practices, a more compelling direction in the context of learning will move beyond replicating what is already known in order to create entirely new forms of interaction, expression, and activity, along with the metaphors needed to make them comprehensible.
A sampling of applications of gesture-based computing across disciplines includes the following:
- Art. The UDraw GameTablet uses the Wii Controller to combine gestures for creating drawings and gaming, indicating directions for using gesture-based technology to expand creative inquiry through gaming and art.
- Education. The research agenda for the Media Design Program at Art Center College of Design includes educational technologies that use gesture-based computing, and students focus on creating new interfaces for learning.
- Music. The EyeMusic project at the University of Oregon uses eye-tracking sensors to compose multimedia productions based on the movements of the user’s eyes.
Gesture-Based Computing in Practice
The following links provide examples of how gesture-based computing is being used in higher education settings.
A pair of MIT graduate students have created a gesture-based interaction system using off-the-shelf computer cameras and a pair of Lycra gloves that would cost $1 to produce.
Auckland Museum’s Hybridiser Exhibit (video)
This innovative project at the Auckland Museum uses touch-screen interfaces to allow visitors to create custom virtual orchids in lifelike detail.
This project, being developed at the University of Oregon, uses eye-movement to create drawings on a computer screen. The sensors can track eye motion and give users fine control over the image they compose.
Laterotactile Rendering of Vector Graphics with the Stroke Pattern
(Vincent Lévesque and Vincent Hayward, Proc. of EuroHaptics 2010, Part II, Kappers, A.M.L. et al. (Eds.), LNCS 6192, Springer-Verlag, pp. 25–30, 2010.) At McGill University researchers are developing a haptic feedback system that allows people with visual impairments to get more feedback with fine degrees of touch.
Created by students at Ball State University, this project uses body gestures to adjust the light in a room for optimal viewing results. Designed for use in the fashion industry, the system offers an integrated lighting and sensor system, much of it built using the open-source Arduino prototyping platform.
(Yvonne Jansen, RWTH Aachen University Media Computing Group, 2010.) Researchers in the Media Computing Group at RWTH Aachen University are developing a localized active haptic feedback interface called MudPad for fluid touch interfaces in order to offer more nuanced ways to interact with screens through touch.
For Further Reading
The following articles and resources are recommended for those who wish to learn more about gesture-based computing.
7 Areas Beyond Gaming Where Kinect Could Play A Role
(Alex Howard, O’Reilly Radar, 3 December 2010.) This post looks at how Microsoft’s gesture-based Kinect system could find broad application beyond its intended role as a gaming platform, including uses in art, health, and education.
Controlling Phones With the Body Electric
(Ashlee Vance, NYTimes.com, 17 February 2010.) At the 2010 Mobile World Congress, technology companies demonstrated systems that detect disruptions to electrical fields, allowing a smartphone to respond, for example by answering a call, without the user pressing a button on the device. Other demonstrations included using eye movements to control computer functions on mobile devices.
Delicious: Gesture-Based Computing
Follow this link to find additional resources tagged for this topic and this edition of the Horizon Report, including the ones listed here. To add to this list, simply tag resources with “hz11” and “gesturecomputing” when you save them to Delicious.
Is Apple Considering Next-Gen Tactile Feedback for iOS Devices?
(Jack Purcher, PatentlyApple.com, 2 August 2010.) Apple is exploring potential technology that would bring tactile feedback to its mobile devices, giving users new levels of feedback and interaction beyond simple touch gestures. A unique feature of this technology, provided by Senseg, is the lack of mechanical motors, so there are no moving parts to break or wear out.
New Interaction Rituals: Getting the Playful Interfaces We Deserve
In this presentation from 2007, Julian Bleecker asks how we might take an art-technology approach to interface design that is gestural to create more playful experiences.
Point, Click: A Review of Gesture Control Technologies
(Damian Rollison, VentureBeat.com, 9 February 2010.) This article discusses the key developers and platforms working with gesture-based technologies.