
Kinect to your applications



Patrick Hynds
January 10, 2012

Building from the ground up
The development tools and the infrastructure are catching up quickly to help us build touch-enabled applications. For example, software development tool vendor DevExpress is committed to touch support across all of its products, even offering a touch-capable grid control.

One of the key challenges is not to force touch support into a project by shoehorning the functionality into existing interfaces, but to take advantage of the new capability to enrich the experience and make the application easier to use. InterKnowlogy bundled some of its experience into a scatter control that enables multi-touch (provided, of course, the hardware supports it). Knowing what tools are available is an important part of being in a position to use these new interfaces when the opportunity arises. A great example of leveraging the strengths of the Surface is InterKnowlogy’s Warehouse Commander application, which takes data from Microsoft Dynamics and allows a shipping warehouse to optimize bin placement for greater efficiency, all through a very visual, touch-enabled application. A full demo of the program offers a glimpse of a line-of-business application that plays to the Surface's strengths quite well.

Touch interfaces are cool, but they have their limits. When you look beyond touch interfaces, things get a bit less defined. For example, swiping a finger across a display or interface-enabled surface is clearly touch, but what about putting an object on a tabletop device? This is a common element of the demos we see with future interfaces, and the Surface even has a tag system to support this kind of interaction. Then there are waves, pointing, and any number of other motions that are referred to as spatial gestures. In techie circles, we tend to call these “Minority Report”-style interfaces after the Steven Spielberg movie starring Tom Cruise that depicts characters using interfaces that are just one step short of "Star Trek's" Holodeck.

The more accurate term is NUI (Natural User Interface), and the Holodeck is the best example of where things could be heading, though probably not in my lifetime. The common theme when NUI is discussed is that the interface should be invisible, or at least non-intrusive. By that definition, the Kinect gets us most of the way there. The goggles-and-gloves rigs of years past technically fit the NUI category as well, but they fail on the non-intrusive point. Microsoft gets it, but as always, it needed competitors to force it down the road.

In fact, right around the time Microsoft was prototyping the first Surface device, Spielberg worked with Microsoft to explore futuristic interfaces while making "Minority Report." As previously mentioned, this film is widely referenced as the poster child for NUI, with its depictions of the protagonist plucking virtual objects out of the air (i.e., performing spatial gestures to control the system). Spatial gestures in the Microsoft world are the province of the Kinect, which will be discussed in detail later in the article.

Neil Roodyn, director of nsquared, talks about a system he has worked on where the user is immersed in a building and can manipulate fixtures, including swapping out doorknobs and moving lamps and other furniture with gestures. The system he demonstrates supports touch through tablet integration, but it also goes beyond that to what he calls “vision systems.” By this, he means a system that understands the context of the environment much as a person seeing it would: not only knowing that something was placed on a touch surface, but what was put there and by whom.

During a talk that Roodyn delivered, he used the Kinect as the provider of a rich vision system for accomplishing that environmental understanding, thanks to the color camera, infrared depth sensor and microphone array it provides. During the same presentation, he pointed out a pretty amazing video that Corning produced over a year ago titled "A Day Made of Glass," which shows where the company imagines it could take its products by leveraging touch interfaces. It is well worth watching for yourself.

The Surface plays in a different market than the other technologies discussed here. At a price of thousands of dollars, it is very expensive compared to the Kinect or even a touch-enabled tablet, but it was never meant to be a mass-market device. Conversely, the Kinect was never really meant to be a general-purpose input device.

Kinect, a game-changer
The Kinect is a whole different animal, and is likely to be a major game-changer in much the same way the iPhone was for taking touch mainstream. The barrier to entry for users is much lower, with a price point under US$200.

It was originally called Project Natal, and I first heard about it when someone pointed me at a video positioning it as Microsoft's answer to the Nintendo Wii game system, but raising the bar by taking the controller out of the picture entirely. It is turning out to surprise everyone, including Microsoft: it one-upped the Wii's motion-based interface while requiring no controller of any kind.

Nintendo certainly deserves some of the credit for putting us on this road, as Microsoft does its best innovation in response to a competitive threat, and the Wii controller and balance board set the stage. The Kinect was dubbed the “fastest-selling consumer electronics device” after selling 8 million units in its first 60 days.

It was not long before developers started to hack at the device to figure out how it worked and how it could be leveraged outside of the Xbox 360. In fact, there are all kinds of videos and guides from about a year ago showing how to rig up the USB-like plug from the Kinect so that it could be connected to a PC USB port and supplied with external power. This seemed to catch Microsoft by surprise, and since it violated the terms of use, there was an expectation of a backlash from Microsoft.

Rather than clamp down on these early innovators, Microsoft adapted by developing a plan to create and release a Windows SDK to let developers put the Kinect to work for Windows applications. The Kinect for Windows SDK, along with resources such as tutorials, is available at www.kinectforwindows.org. The current set of bits supports Windows 7 and the Windows 8 Developer Preview, and is licensed for non-commercial purposes only, with a commercial version promised in “early 2012.”

We will discuss exactly what the SDK provides shortly. There is one small piece of hardware beyond the Kinect sensor bar itself that you will need if you want to start playing with the Kinect SDK, and it has to do with the sensor's USB-like connector. You do not have to break out your soldering iron, because Microsoft and third-party provider Nyko both offer an adapter that lets you plug the Kinect into older Xbox 360s that lack the proper connector for the Kinect.

I suspect that the vast majority of people buying these adapters today are using them to connect the Kinect to a PC for programming purposes rather than to older Xboxes. The adapters also supply external power, since the Kinect's connector carries power as well as data. At $25 to $35, that is a great deal, since you avoid the hassle of trying to build your own.

The Kinect for Windows SDK still has Microsoft Research’s fingerprints all over it. That is by no means a bad thing, since the most jaw-dropping, cool things from Microsoft typically have their start at Microsoft Research. The first thing you have to do when starting a project that leverages the Kinect is to add a reference to the Microsoft.Research.Kinect DLL. The SDK comes with samples that are a great help in getting started, including the Shape Game and the Skeletal Viewer.
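To give a flavor of that setup step, here is a minimal C# sketch modeled loosely on the Skeletal Viewer sample. It assumes the beta SDK's managed API as I recall it (the Runtime class and RuntimeOptions flags live in the Microsoft.Research.Kinect.Nui namespace), and the KinectSetup class name is purely illustrative; check names and overloads against the sample source that ships with the SDK.

using Microsoft.Research.Kinect.Nui;

class KinectSetup
{
    private Runtime nui;

    public void Start()
    {
        // Create the runtime and declare which subsystems we intend to use.
        nui = new Runtime();
        nui.Initialize(RuntimeOptions.UseColor |
                       RuntimeOptions.UseDepthAndPlayerIndex |
                       RuntimeOptions.UseSkeletalTracking);

        // Open the color and depth streams; the second argument is the
        // number of buffers the runtime keeps for each stream.
        nui.VideoStream.Open(ImageStreamType.Video, 2,
                             ImageResolution.Resolution640x480, ImageType.Color);
        nui.DepthStream.Open(ImageStreamType.Depth, 2,
                             ImageResolution.Resolution320x240,
                             ImageType.DepthAndPlayerIndex);
    }

    public void Stop()
    {
        // Release the sensor when the application shuts down.
        nui.Uninitialize();
    }
}

Both the Shape Game and the Skeletal Viewer wrap essentially this same initialization sequence inside a WPF window.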

The source code for these two samples will be invaluable for jump-starting your understanding of how to make use of the capabilities provided by the SDK. The latest version of the Kinect for Windows SDK provides access to raw data streams from the depth sensor, the color camera and the microphone array, which consists of four microphones. This is the source of the ocean of data alluded to in my conversation with InterKnowlogy's Coon.
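To make "raw data streams" concrete, here is a hedged sketch of a depth-frame handler, meant to sit on the illustrative KinectSetup class above and again assuming the beta managed API. In DepthAndPlayerIndex mode each pixel arrives as two bytes, with a player index in the low three bits and depth in millimeters in the rest; double-check that bit layout against your SDK version before relying on it.

using System;
using Microsoft.Research.Kinect.Nui;

// Hooked up during initialization with:
//   nui.DepthFrameReady += OnDepthFrameReady;
void OnDepthFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    PlanarImage img = e.ImageFrame.Image;
    byte[] bits = img.Bits;   // two bytes per pixel in DepthAndPlayerIndex mode

    // Look at the pixel in the center of the frame.
    int center = (img.Height / 2 * img.Width + img.Width / 2) * 2;
    int playerIndex = bits[center] & 0x07;                        // low 3 bits
    int depthMm = (bits[center] >> 3) | (bits[center + 1] << 5);  // depth in mm

    Console.WriteLine("Center pixel: {0} mm, player {1}", depthMm, playerIndex);
}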

One of the key functions is skeletal tracking, since whole-body tracking is a great way to provide control, and both sample programs make good use of it. Many of the improvements in the latest version are to the skeletal tracking system, including a boost in speed and accuracy. The runtime can also now survive losing connectivity with the Kinect without losing everything, a problem with the previous version.
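The samples consume skeletal tracking through the SkeletonFrameReady event. The sketch below follows that pattern under the same assumptions as the earlier snippets (beta API, JointID enumeration, illustrative class), walking the tracked skeletons and reading the right-hand position as an example.

using System;
using Microsoft.Research.Kinect.Nui;

// Hooked up during initialization with:
//   nui.SkeletonFrameReady += OnSkeletonFrameReady;
void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    foreach (SkeletonData skeleton in e.SkeletonFrame.Skeletons)
    {
        // The runtime reports several skeleton slots, but only the ones
        // marked Tracked carry full joint data.
        if (skeleton.TrackingState != SkeletonTrackingState.Tracked)
            continue;

        // Joint positions are reported in meters, relative to the sensor.
        Joint rightHand = skeleton.Joints[JointID.HandRight];
        Console.WriteLine("Right hand at ({0:F2}, {1:F2}, {2:F2})",
                          rightHand.Position.X,
                          rightHand.Position.Y,
                          rightHand.Position.Z);
    }
}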

