Got a chance to play with a Kinect recently. I know… about two years after everyone else. :)
Anyway, I wanted to get started prototyping/testing really quickly, and lately I have come to think that the most accessible and quickest way to do that is with JavaScript and 2D canvas or WebGL.
So I found KinectJS, which makes it really simple to get started. Just run the server and you get access to the skeleton nodes etc. through a WebSocket.
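To give an idea of the setup, here is a minimal sketch of the browser side. The port and the JSON message format are assumptions on my part, so check what your KinectJS build actually sends:

```js
// Minimal sketch: connect to the KinectJS server and keep the latest
// skeleton frame around. Port and message format are assumptions.
var socket = new WebSocket('ws://localhost:8181');

var skeleton = null; // latest skeleton frame

socket.onopen = function () {
  console.log('connected to the Kinect server');
};

socket.onmessage = function (event) {
  // assuming one JSON-encoded skeleton per message, e.g.
  // { joints: { head: {x, y, z}, handRight: {x, y, z}, ... } }
  skeleton = JSON.parse(event.data);
};
```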
There are others of course, like as3NUI if you want to go with as3. Or obviously c# and c++ examples that comes with the sdk.
Since this is really hard to show online, I have tried to screen-record some of the tests.
The first thing was just to try and get the nodes showing.
So this is just a canvas plotting the nodes as a skeleton:
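Roughly something like this, assuming the node positions have already been mapped to canvas coordinates (the joint names here are made up, use whatever your server sends):

```js
var canvas = document.getElementById('stage');
var ctx = canvas.getContext('2d');

// which nodes to connect with lines (just arms and spine for brevity)
var bones = [
  ['head', 'shoulderCenter'],
  ['shoulderCenter', 'elbowLeft'], ['elbowLeft', 'handLeft'],
  ['shoulderCenter', 'elbowRight'], ['elbowRight', 'handRight'],
  ['shoulderCenter', 'spine'], ['spine', 'hipCenter']
];

function draw() {
  requestAnimationFrame(draw);
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  if (!skeleton) return;
  var joints = skeleton.joints;

  // bones as lines
  bones.forEach(function (bone) {
    var a = joints[bone[0]], b = joints[bone[1]];
    if (!a || !b) return;
    ctx.beginPath();
    ctx.moveTo(a.x, a.y);
    ctx.lineTo(b.x, b.y);
    ctx.stroke();
  });

  // nodes as dots
  for (var name in joints) {
    ctx.beginPath();
    ctx.arc(joints[name].x, joints[name].y, 4, 0, Math.PI * 2);
    ctx.fill();
  }
}
draw();
```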
Next thing I tried was some headtracking and moving a camera around accordingly to get some sort of “holographic” effect.
It's filmed with a crappy mobile camera, sorry for that. But you should get the idea:
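The basic trick is just to move the camera opposite to the head and keep it pointed at the scene. Something like this with three.js, where the scale factors are tuned by eye rather than anything exact:

```js
// Sketch: head position drives the camera, giving a cheap parallax
// "hologram" effect. Uses the `skeleton` frames from the socket above.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 1000);

function updateCamera() {
  if (!skeleton) return;
  var head = skeleton.joints.head;
  camera.position.x = -head.x * 0.01;      // mirror the head sideways
  camera.position.y =  head.y * 0.01;
  camera.position.z = 5 + head.z * 0.005;  // step back when the viewer does
  camera.lookAt(scene.position);           // always aim at the scene centre
}
```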
Then the most obvious geek thing: controlling a lightsaber with your arm:
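The saber is basically just the forearm direction extended. A sketch on canvas, with assumed joint names again:

```js
function drawSaber(ctx, joints) {
  var elbow = joints.elbowRight, hand = joints.handRight;
  if (!elbow || !hand) return;
  // the blade continues in the elbow-to-hand direction
  var angle = Math.atan2(hand.y - elbow.y, hand.x - elbow.x);
  ctx.save();
  ctx.translate(hand.x, hand.y);
  ctx.rotate(angle);
  ctx.strokeStyle = '#6cf';
  ctx.lineWidth = 6;
  ctx.shadowColor = '#6cf'; // cheap glow
  ctx.shadowBlur = 20;
  ctx.beginPath();
  ctx.moveTo(0, 0);
  ctx.lineTo(250, 0); // blade length in pixels
  ctx.stroke();
  ctx.restore();
}
```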
A test emitting particles from your hands, feels good:
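The particles are about as simple as it gets: a pool of short-lived dots spawned at the hand nodes every frame.

```js
var particles = [];

function emitFromHand(hand) {
  particles.push({
    x: hand.x, y: hand.y,
    vx: (Math.random() - 0.5) * 4, // small random drift
    vy: (Math.random() - 0.5) * 4,
    life: 1 // fades down to 0
  });
}

function updateParticles(ctx) {
  for (var i = particles.length - 1; i >= 0; i--) {
    var p = particles[i];
    p.x += p.vx;
    p.y += p.vy;
    p.life -= 0.02;
    if (p.life <= 0) { particles.splice(i, 1); continue; }
    ctx.globalAlpha = p.life;
    ctx.fillRect(p.x, p.y, 2, 2);
  }
  ctx.globalAlpha = 1;
}
```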
Draw some trails:
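The trails come almost for free: instead of clearing the canvas each frame, paint a translucent black rectangle over it so old strokes fade out, then connect the hand's previous and current positions:

```js
function fade(ctx) {
  // old pixels fade a little every frame instead of being cleared
  ctx.fillStyle = 'rgba(0, 0, 0, 0.08)';
  ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);
}

var last = null;
function drawTrail(ctx, hand) {
  if (last) {
    ctx.strokeStyle = '#fff';
    ctx.beginPath();
    ctx.moveTo(last.x, last.y);
    ctx.lineTo(hand.x, hand.y);
    ctx.stroke();
  }
  last = { x: hand.x, y: hand.y };
}
```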
Always wondered what it feels like to be a flower :D
Then a test connecting some Box2D stuff to the nodes:
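The idea is one kinematic body per node that gets snapped to the node's position every frame, so it shoves dynamic bodies around without being pushed back. A sketch assuming the box2dweb port (class names per that build):

```js
var b2Vec2 = Box2D.Common.Math.b2Vec2,
    b2World = Box2D.Dynamics.b2World,
    b2BodyDef = Box2D.Dynamics.b2BodyDef,
    b2Body = Box2D.Dynamics.b2Body,
    b2FixtureDef = Box2D.Dynamics.b2FixtureDef,
    b2CircleShape = Box2D.Collision.Shapes.b2CircleShape;

var SCALE = 30; // pixels per Box2D metre
var world = new b2World(new b2Vec2(0, 10), true);

// a kinematic circle per tracked node: it pushes dynamic bodies
// around but isn't affected by them
function makeNodeBody(x, y) {
  var bd = new b2BodyDef();
  bd.type = b2Body.b2_kinematicBody;
  bd.position.Set(x / SCALE, y / SCALE);
  var fd = new b2FixtureDef();
  fd.shape = new b2CircleShape(10 / SCALE);
  var body = world.CreateBody(bd);
  body.CreateFixture(fd);
  return body;
}

// each frame, snap the body to where the node is now
function syncNode(body, joint) {
  body.SetPosition(new b2Vec2(joint.x / SCALE, joint.y / SCALE));
}
```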
Jumped around a bit too much while testing that, broke my fucking lamp…
Then an attempt at making something more installation-like. The idea is to project this onto one wall of a room. When it detects a person, it lights up like a long corridor and extends the room, and then uses headtracking to change the camera position so the perspective is correct from that person's POV (a sketch of that below). That's the theory anyway.
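For the perspective to actually line up, moving the camera isn't enough; the projection itself has to be skewed towards the viewer (an off-axis frustum). A sketch of the matrix, treating the projected wall as the screen plane; eye position and screen size are in the same units, with the eye in front of the screen:

```js
// Off-axis projection: skew the frustum so the virtual corridor lines
// up with the real wall from the viewer's head position. `eye` is the
// head relative to the screen centre, screenW/screenH the physical
// screen size (same units, eye.z > 0).
function offAxisProjection(eye, screenW, screenH, near, far) {
  var s = near / eye.z; // scale the screen-plane bounds to the near plane
  var l = (-screenW / 2 - eye.x) * s;
  var r = ( screenW / 2 - eye.x) * s;
  var b = (-screenH / 2 - eye.y) * s;
  var t = ( screenH / 2 - eye.y) * s;
  // standard glFrustum-style matrix, column-major for WebGL
  return [
    2 * near / (r - l), 0, 0, 0,
    0, 2 * near / (t - b), 0, 0,
    (r + l) / (r - l), (t + b) / (t - b), -(far + near) / (far - near), -1,
    0, 0, -2 * far * near / (far - near), 0
  ];
}
```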
Also, to have some sort of interactivity and some reference points in the extended room, you can “throw” some balls by moving your hands in Z above a certain velocity; the balls then “inherit” the velocity of the hand.
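The throwing part is just finite differences on the hand position. Something like this, where which way is “forward” depends on your coordinate space:

```js
var prevHand = null;
var THROW_SPEED = 1.5; // z-velocity threshold, tuned by eye

function maybeThrow(hand, dt, balls) {
  if (prevHand) {
    var vz = (hand.z - prevHand.z) / dt;
    // hand moving towards the wall fast enough? (flip the sign
    // if your z axis points the other way)
    if (-vz > THROW_SPEED) {
      // the ball inherits the hand's full velocity at release
      balls.push({
        x: hand.x, y: hand.y, z: hand.z,
        vx: (hand.x - prevHand.x) / dt,
        vy: (hand.y - prevHand.y) / dt,
        vz: vz
      });
    }
  }
  prevHand = { x: hand.x, y: hand.y, z: hand.z };
}
```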
And a variation with an object you can spin around by moving your hands.
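Same trick again, except the hand's sideways velocity becomes angular velocity, with some damping so the object keeps coasting when you let go (the thresholds are made up):

```js
var spin = { angle: 0, velocity: 0 };
var prevX = null;

function updateSpin(hand, dt) {
  if (prevX !== null) {
    var v = (hand.x - prevX) / dt;
    // only "grab" the object when the hand is actually moving
    if (Math.abs(v) > 50) spin.velocity = v * 0.002;
  }
  prevX = hand.x;
}

function stepSpin(dt) {
  spin.angle += spin.velocity * dt;
  spin.velocity *= 0.98; // slowly spins down
}
```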
So I have mostly tested the Kinect as an input device (which is what it is… duh), though I haven't touched the video and depth-map stuff at all.
And as an input device it certainly has some pros and cons. Doing precision stuff, for example controlling something that needs lots of accuracy, is really hard. I guess that's partly because there is no tactile feedback (a bit of the same problem touchscreens have, imo). As a result, control interfaces have to be quite forgiving (for example like this)…
But on the other hand there is stuff that can feel really nice and responsive, like emitting particles or drawing and things like that. I guess you could say that creating or influencing something with motion in a void feels nicer than trying to control something with motion in a void. :)
In our business it certainly has potential for installations and similar things; I have obviously just scratched the surface here. It was lots of fun though!