Monday, March 19, 2007

Houston, we have das blinkin lights!

Oh boy... I am, like, SO totally stoked.

(you can tell when I begin to let slip 80's idiom with wild abandon)

(more wild than 80's hair, even.)

(totally)

Anyway. Where was I? Oh yeah. Stoked. Totally, dude.

(I'll stop now.)

Closing in on actually having the robot track a live object. I parsed out the source of Andrew Kirillov's most excellent motion tracking library and enhanced it (aka: hacked up something awful) in order to:

1.) track a laser pointer with more accuracy
2.) calculate a returned object's center of mass

I'm not only stretching my coding skills (unsafe code? Oh...that means you can mess up pointers with wild abandon!) but also dusting off dimly remembered image processing theory from back when I did QA on video codecs.

But it's fun.

Alas, all is not without stinkiness. Even after parsing the image capture code from here to eternity, I haven't been able to figure out how to change the @$!$#@$ default input resolution on the #!@$!# old Intel USB cam. It'll do 640x480 @ 15fps or even 320x240 @ 30fps. But does it default to that? Nooooooooo....I get scaled-up 160x120! (yeah baby! We're talking Gen-u-ine Indeo 3 quality here! Cinepak here I come!)

In a fit of insanity (what WILL this do to performance?!!) I hooked up my miniDV camcorder (via FireWire). Not only did the goodness of the MS capture generics work just fine...it captured BEAUTIFULLY!

Here's a quick shot:



An interesting bit on the laser tracker. You'd think (ha!) that all you'd need to do was grab the "brightest" dot in the image and that'd (of course) be the pointer.

Not necessarily. I wound up having to do some special sauce to track "hotspots" and filter them out. (see that bright brass hinge in the picture above? No? Well if you squint rrREEEeally hard...that bit was especially troublesome) Early results look very promising, but we still need more tuning.
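
In case you're curious what the special sauce looks like in spirit, here's a minimal sketch (not the actual project code; the frame format, thresholds, and decay factor are all assumptions I'm making for illustration). The idea: bright pixels that stay bright frame after frame are hotspots (hinges, lamps, reflections) and get ignored; the brightest pixel that isn't a hotspot is our laser candidate.

    using System.Drawing;

    // Sketch only: assumes an 8bpp grayscale frame handed over as a byte[].
    public class LaserFinder
    {
        private float[] persistence;                 // how long each pixel has stayed bright
        private const byte BrightThreshold = 230;    // "bright enough to be the laser" (assumed)
        private const float HotspotCutoff = 15f;     // frames of constant brightness = hotspot (assumed)

        public Point? FindLaser(byte[] gray, int width, int height)
        {
            if (persistence == null)
                persistence = new float[width * height];

            int bestIndex = -1;
            byte bestValue = BrightThreshold;

            for (int i = 0; i < gray.Length; i++)
            {
                // Bright pixels accumulate persistence; everything else decays.
                if (gray[i] >= BrightThreshold)
                    persistence[i] += 1f;
                else
                    persistence[i] *= 0.9f;

                // A laser candidate is bright AND not a long-lived hotspot
                // (the brass hinge stays lit forever; the dot moves around).
                if (gray[i] >= bestValue && persistence[i] < HotspotCutoff)
                {
                    bestValue = gray[i];
                    bestIndex = i;
                }
            }

            return bestIndex < 0 ? (Point?)null : new Point(bestIndex % width, bestIndex / width);
        }
    }

(The obvious wart: hold the laser perfectly still long enough and a sketch like this will start filtering it out too, which is part of why the real thing needs more tuning.)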

Objects were equally in need of tweaking. The library has a pre-configured tracker that returns objects, a rectangle around them, AND a tracking number.

SUPER handy. Except it's managed code (I'm thinking). Regardless, it's slow. Well, slower than the "optimized" tracker that returned a pixelated-but-closer-fitting object boundary. I somehow managed to hack in a bit of code to feed that pixelated image (well, a black and white version) into the object tracker for blob numbering, use the resulting blobish goodness to figure out a center of mass, and still keep things running pretty well. (CPU isn't smoking yet.)
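
If that paragraph read like word salad, the center-of-mass part boils down to roughly this (a sketch, assuming the black-and-white blob image arrives as a byte[] where non-zero means "object pixel"; the blob-numbering step that decides which pixels belong to which object isn't shown):

    using System.Drawing;

    // Sketch: centroid of the "on" pixels in an 8bpp binary mask.
    public static class MassCenter
    {
        public static Point? Compute(byte[] mask, int width, int height)
        {
            long sumX = 0, sumY = 0, count = 0;

            for (int y = 0; y < height; y++)
            {
                int row = y * width;
                for (int x = 0; x < width; x++)
                {
                    if (mask[row + x] != 0)
                    {
                        sumX += x;
                        sumY += y;
                        count++;
                    }
                }
            }

            // Average of all object-pixel coordinates = center of mass.
            return count == 0 ? (Point?)null : new Point((int)(sumX / count), (int)(sumY / count));
        }
    }

The payoff: that point stays inside the actual silhouette, whereas the center of a bounding rectangle can easily land on empty background if the object is oddly shaped.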

While it's not "aim for the whites of their eyes!", it'll be somewhat more accurate than aiming for the center of a returned rectangle. (Or so I'm hoping.)

Lessee...what's next...

Develop a class to track an object's last n positions and average velocity, use those to estimate where it thinks the pointer/object should be next...and see if the returned tracked coordinates are close.

(In case the laser does take a jump, it'll help the robot track more steadily).

I.e., we'll figure out where we think the object is supposed to be, and if it isn't there, we'll just "fake it" for a few frames, see if it comes back, and keep on.
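
Something along these lines is what I have in mind (only a sketch, since it isn't written yet; the history length, jump tolerance, and coast limit are numbers I'm pulling out of thin air):

    using System;
    using System.Collections.Generic;
    using System.Drawing;

    // Sketch of the planned tracker: keep the last n positions, estimate an
    // average velocity, predict the next point, and if the detector's answer
    // jumps wildly, coast on the prediction for a few frames.
    public class PositionPredictor
    {
        private readonly List<PointF> history = new List<PointF>();
        private const int HistorySize = 8;        // "last n positions" (assumed value)
        private const float JumpTolerance = 40f;  // pixels; beyond this it's a glitch (assumed)
        private const int MaxCoastFrames = 5;     // how long we're willing to fake it (assumed)
        private int coasting;

        public PointF Update(PointF measured)
        {
            PointF predicted = Predict();

            if (history.Count >= 2 && coasting < MaxCoastFrames
                && Distance(measured, predicted) > JumpTolerance)
            {
                // Looks like a glitch: ignore the measurement and fake it for a frame.
                coasting++;
                Remember(predicted);
                return predicted;
            }

            coasting = 0;
            Remember(measured);
            return measured;
        }

        private PointF Predict()
        {
            if (history.Count < 2)
                return history.Count == 1 ? history[0] : PointF.Empty;

            // Average velocity over the stored history, applied to the newest point.
            PointF first = history[0];
            PointF last = history[history.Count - 1];
            float steps = history.Count - 1;
            return new PointF(last.X + (last.X - first.X) / steps,
                              last.Y + (last.Y - first.Y) / steps);
        }

        private void Remember(PointF p)
        {
            history.Add(p);
            if (history.Count > HistorySize)
                history.RemoveAt(0);
        }

        private static float Distance(PointF a, PointF b)
        {
            float dx = a.X - b.X, dy = a.Y - b.Y;
            return (float)Math.Sqrt(dx * dx + dy * dy);
        }
    }

Feed it every tracked point and it hands back either the measurement or, during a glitch, its own best guess.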

The laser tracks reasonably steadily, but the object "center of mass"...not so much. Now that I have the callback code in, that kind of smoothing is possible.

Oh yeah, did I mention I had to relearn the whole delegate-message-passing-in-a-thread-safe-way two-step? Good news is it took considerably less time this time. About an hour vs. several days. That helped put things into perspective.
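
For my own future reference (and so round three takes minutes instead of an hour), the two-step boils down to something like this. The form, label, and method names are made up for illustration; the pattern is just InvokeRequired plus BeginInvoke:

    using System;
    using System.Windows.Forms;

    public class TrackerForm : Form
    {
        private readonly Label positionLabel = new Label();
        private delegate void TrackingUpdateHandler(int x, int y);

        public TrackerForm()
        {
            positionLabel.AutoSize = true;
            Controls.Add(positionLabel);
        }

        // Called from the video capture thread, NOT the UI thread.
        public void OnTrackingUpdate(int x, int y)
        {
            if (InvokeRequired)
            {
                // Step one: wrong thread, so marshal the call over to the UI thread and bail.
                BeginInvoke(new TrackingUpdateHandler(OnTrackingUpdate), x, y);
                return;
            }

            // Step two: now on the UI thread, so it's safe to touch controls.
            positionLabel.Text = string.Format("Target at {0}, {1}", x, y);
        }
    }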

Overall I'm very pleased with how it's coming along. Good, solid accomplishment-reinforcement cycle. Keeps me wanting to push and learn more.

So I wonder how the dog would react to being an "object"...?