At first glance, it's easy to mistake this Tsukuba University-born project for something you would see on the Kinect. However, there's one vital characteristic that sets it apart from its peers: The still-unnamed system, which allows a user to manipulate virtual blocks by recognizing the movements of their hands and fingers, doesn't use traditional markers or sensors.
Instead, it operates by searching a huge database, filled with 3D hand shapes and 2D data about "how hands look," for the stored 2D view that most closely resembles the image coming from the camera, then outputting the 3D data associated with that view. That lookup is what allows it to recognize your movements and work its magic. This happens in real time, by the way.
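The article doesn't describe the implementation, but the database-lookup idea it sketches can be illustrated as a nearest-neighbor search: pair each stored 2D appearance descriptor with the 3D hand pose that produced it, then return the pose whose descriptor best matches the current camera frame. Everything below (the descriptor size, the 20-joint pose format, the random stand-in data) is a hypothetical sketch, not the actual system.

```python
import numpy as np

# Hypothetical sketch of the lookup described above: each database entry
# pairs a 2D appearance descriptor ("how the hand looks") with the 3D
# hand pose that produced it. Recognition is a nearest-neighbor search.

def build_database(num_entries=1000, descriptor_dim=64, seed=0):
    """Stand-in database: random descriptors and random 3D joint positions."""
    rng = np.random.default_rng(seed)
    descriptors = rng.random((num_entries, descriptor_dim))
    poses_3d = rng.random((num_entries, 20, 3))  # 20 joints, xyz each
    return descriptors, poses_3d

def estimate_pose(frame_descriptor, descriptors, poses_3d):
    """Return the 3D pose whose stored 2D view best matches the frame."""
    dists = np.linalg.norm(descriptors - frame_descriptor, axis=1)
    best = int(np.argmin(dists))
    return poses_3d[best], dists[best]

descriptors, poses_3d = build_database()
# Simulate a camera frame that is a near-copy of database entry 42.
query = descriptors[42] + 0.001
pose, dist = estimate_pose(query, descriptors, poses_3d)
print(np.allclose(pose, poses_3d[42]))  # the nearest entry is #42
```

A real system would extract descriptors from camera frames (for example, silhouette or edge features) and use an index structure such as a k-d tree to keep the search fast enough for real-time use.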
According to DigInfo TV, the team responsible for the system wants to make gesture-based PC operations a reality:
"In other words, now that 3D TV has come into the home, we don't want to settle for just watching 3D pictures. We also want to enable 3D operation of PCs. When you use a PC in 3D, the 3D icons will appear to float in midair. By manipulating them through gestures, you could open and delete files, or enlarge and reduce things."
Head on over to DigInfo TV for more, including a video of the system in action.
A plan for developing a "practical system for reading 3D e-books" is currently in the works as well. Personally, I think they should fly up to Stockholm and talk to Notch about integrating their system into Minecraft, but that's just me.
Cassandra Khaw is an entry-level audiophile, a street dancer, a person who writes about video games for a living, and someone who spends too much time on Twitter.