Surfing the Web with a Wave of the Hand


Gesture-based computer interfaces are finally within reach. MIT researchers have designed an easy-to-use system built around a standard webcam and a multicolored Lycra glove.

The glove is covered with 20 irregularly shaped patches in 10 different colors, so that the computer can distinguish the patches from one another and from objects in the background.

If two patches of the same color overlapped, the computer would not know which one to respond to, so the patches are arranged so that the same color never touches itself. For example, because the fingers might clench into a fist, the colors on the fingertips are never repeated on the palm of the hand.

The glove, which could be manufactured for only a dollar, lets the system gauge the position of the hand in three dimensions quickly and precisely, and because it is made of stretch Lycra, it adjusts to fit hands of different sizes.
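To make the tracking idea concrete, here is a minimal sketch of how a program might pick out colored glove patches in webcam frames using OpenCV. It is only an illustration, not the MIT researchers' algorithm: the HSV color ranges and patch names are placeholders, and a real system would calibrate all ten colors and estimate the full 3-D hand pose from the detected patches.

# Minimal sketch: segment a few glove-patch colors in webcam frames with OpenCV.
# The HSV ranges and patch names are illustrative placeholders, not values from
# the MIT system; a real setup would calibrate one range per glove color.
import cv2
import numpy as np

PATCH_COLORS = {
    "red":   ((0, 120, 70),   (10, 255, 255)),
    "green": ((40, 80, 70),   (80, 255, 255)),
    "blue":  ((100, 120, 70), (130, 255, 255)),
}

def find_patches(frame_bgr):
    """Return the image-plane centroid of each patch color visible in the frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    centroids = {}
    for name, (lo, hi) in PATCH_COLORS.items():
        mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
        m = cv2.moments(mask)
        if m["m00"] > 0:  # at least one pixel matched this color
            centroids[name] = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    return centroids

cap = cv2.VideoCapture(0)  # the standard webcam mentioned in the article
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    print(find_patches(frame))  # a pose estimator would consume these positions
    cv2.imshow("glove", frame)
    if cv2.waitKey(1) == 27:    # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()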

For gamers, the technology would let players pick up and wield objects in a virtual world. For engineers and designers, it would let them manipulate 3-D models of commercial products or large civic structures by hand. And if the patterns were replicated on shirts, whole-body motion could be captured to evaluate athletes’ form or convert actors’ live performances into digital animations.

Image: Jason Dorfman/CSAIL

2 Responses to “Surfing the Web with a Wave of the Hand”

  1. Fantastic! I love technology!

  2. Using that to create a virtual keyboard should be rather effortless. Make a printable “keyboard” that has barcode and placement icons (à la Eye of Judgment), and voilà, you can have a keyboard for any language at your fingertips as soon as you print it.

    How about a pencil-tracking program? You could draw something on the table, and as you draw it (with no need to be aware of the computer), it draws the same image on the screen, keeping track of strokes and erasures. Generate it as a vector, so that a scan of the drawing can be made to detect which lines of that vector should have been drawn lightly or not.

    3-D clay modelling? Say you take three X/Y/Z perspective drawings from the last program, convert them into a 3-D clay mold, and then zoom in on the mold to change its shape. Colored plastic ‘utensils’ could add additional texture in the virtual space.

    Virtual HUD for an aircraft simulator controlled by gloves?

    No matter what pops into my head, I have to say… it’s amazing how a simple webcam and some silly colored gloves can trump a lot of the motion-sensing products on the market today. Where can I buy a pair?

    Perhaps an idea would be to create an application that can run on a separate computer, or, in multi-GPU setups with separate monitors, on one monitor reserved strictly for a ‘virtual console’ that can be user-designed to provide analog and keyboard input to whatever other applications are running.

    So say you were playing the somewhat popular EVE Online, and, for the sake of explaining the multi-computer setup as well, let’s say you have a laptop and a PC connected to, say, your 46″ HD TV. On your laptop, you have the virtual console application running. You choose familiar shapes like buttons and levers, or modern keyboards, or switches, or even an LCARS (Star Trek) interface (modular programming, so anyone can make a virtual ‘interface device’ and submit it to the community). You build an interface you’re comfortable with and map what happens when you do what. Create a virtual joystick that, at certain pitch, yaw, and roll values, clicks on preset locations on the screen, or rotate the game camera using virtualized mouse gestures. Flick a bunch of virtual switches, and now you’re shootin’ your weapons!

    The application could be networkable: while you have your virtual console on the laptop, it sends the input over the local WiFi network to the PC being controlled, where it is received by an application that provides feedback to the laptop, letting it know which program has focus so the right interface can be loaded. I think this works well because most laptops have built-in webcams. (A rough sketch of the networking part appears after this comment.)

    I am curious how the motion continues to track if the hands overlap in view of the camera (as if you were holding a rifle, for instance).

    I feel that the biggest problem that may keep such an affordable solution out of a real market would be the lack of feedback. The gloves don’t let you ‘feel’ any tactile surface. Maybe a small electrode in the touch surface of each finger and the palm that, between layers of the glove, causes a small pocket of sorts to enlarge slightly and create a gentle pressure?

    Eventually the user would become used to the console layouts they’ve created for themselves and be able to navigate those consoles mostly by memory and tactile feedback, only occasionally having to refer to their onscreen console.

    I suggest a virtual console like this because it adds versatility in a way that allows people to create control profiles that they can share, creating a comfort zone that encourages users to contribute to its application compatibility and helps new users find premade tools and profiles to jump right into trying it with their favorite software.

    I would recommend both the “basic glove” and the “tactile glove,” which could work off two CR/DL2032 lithium coin cells and a small calculator-quality solar panel on the top of the wrist to keep it charging under room light.

    I’m sure the tactile glove would cost more like $40 per glove, but if it’s durable and effective, I’d pay it! Revolutionize my gaming experience please. ^.^
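To show what the networked-console idea in the comment above might involve at its simplest, here is a rough Python sketch of sending input events from a laptop to a PC over the local network. The port number, message format, and function names are invented for the example, and it omits the harder parts (injecting events into a game and reporting back which program has focus).

# Rough sketch of the commenter's networked virtual-console idea: the laptop
# sends input events over the local network; the PC receives and prints them.
# Port, message format, and names are invented for illustration only.
import json
import socket

PORT = 5005  # arbitrary choice for the example

def send_event(pc_address, event):
    """Laptop side: send one virtual-console event to the PC."""
    with socket.create_connection((pc_address, PORT)) as conn:
        conn.sendall(json.dumps(event).encode() + b"\n")

def run_receiver():
    """PC side: accept connections and print incoming events.
    A real version would inject them as keyboard/joystick input and
    send back which application currently has focus."""
    with socket.create_server(("", PORT)) as server:
        while True:
            conn, _ = server.accept()
            with conn, conn.makefile() as lines:
                for line in lines:
                    print("input event:", json.loads(line))

# Example laptop-side call:
# send_event("192.168.1.20", {"control": "virtual_switch_3", "state": "on"})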
