Saturday, March 31, 2007

Two steps forward, one step back...

It's late. My eyes are tired. My brain is tired.

But it's working again.

The first step toward a "dead reckoning" method of tracking was to get readings from the tachometers in the NXT's motors, i.e. the "number of degrees turned".

Then I had to do something useful with them. So I decided to use the laser as a guide for defining a "box" for targeting: target the upper right and lower left corners, then use those "coordinates" as scale factors for translating screen coordinates into rotation coordinates. (So to speak.)

And to make sure things were working properly, I decided to implement some boundary checks, i.e. if we move beyond the bounds (say, out of the top of the frame), stop the movement and nudge things back in.
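For the curious, the mapping plus the boundary check boils down to a linear interpolation between the two corner readings and a clamp. Here's a minimal sketch of the idea (the class name, member names, and axis conventions are all made up for illustration, not lifted from my actual code):

using System;

// Sketch only: assumes the tacho readings were captured with the laser
// parked on the lower-left and upper-right corners of the camera frame.
public class TargetingBox
{
    // Tacho readings (degrees) at the calibration corners.
    public int PanMin, PanMax;      // pan at the left edge / right edge
    public int TiltMin, TiltMax;    // tilt at the bottom edge / top edge

    // Size of the video frame the corners were captured against.
    public int ScreenWidth, ScreenHeight;

    // Translate a screen coordinate into pan/tilt tacho targets,
    // clamping anything that wanders outside the calibrated box.
    public void MapToRotation(int x, int y, out int pan, out int tilt)
    {
        double fx = (double)x / ScreenWidth;          // 0..1 across the frame
        double fy = 1.0 - (double)y / ScreenHeight;   // screen y grows downward

        pan = PanMin + (int)(fx * (PanMax - PanMin));
        tilt = TiltMin + (int)(fy * (TiltMax - TiltMin));

        // The "boundary stuff": stop at the edge instead of chasing
        // butterflies past it.
        pan = Math.Max(PanMin, Math.Min(PanMax, pan));
        tilt = Math.Max(TiltMin, Math.Min(TiltMax, tilt));
    }
}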

I did this partly because I've had the robot chase butterflies and grind gears. Totally. I decided that reading the tachs in the motors would help keep things from moving too wildly.

Now I'm finding (of course) new issues. Like I'm getting what seems to be coordinate drift. I'm guessing it has to do with gear lash in the targeting. No biggie.

An even larger issue, though, is this: I thought I'd try targeting the corners of the "video window". And...hoooooboy.

The video motion detection needs some serious optimization. When it's off, my Pentium M 1.6GHz chugs along at around 12% CPU. Connect to the camera and things go wonky.

(wonky. That's a technical term)

Anyway. The UI becomes unresponsive. Click on a button and...wait...for...a...reaction...oops! There goes therobotandit'sturningALLTHEWAYAROUNDOHCRAP!!!

(because even though I release the button the "button is up you can stop now" message is still waiting behind the video processing queue and...)
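The likely fix is the classic one: keep the heavy per-frame work off the UI thread and only marshal a small result back with BeginInvoke, so button messages don't pile up behind frames. A rough sketch with made-up names; note it only ever keeps the newest frame, so the worker drops frames instead of falling behind:

using System.Drawing;
using System.Threading;
using System.Windows.Forms;

// Hypothetical worker: the capture callback hands frames in, the heavy
// motion detection runs on a background thread, and only a tiny result
// object gets marshalled onto the UI thread.
public class FrameWorker
{
    private readonly Form _ui;
    private Bitmap _latestFrame;                  // newest frame only
    private readonly object _lock = new object();

    public FrameWorker(Form ui)
    {
        _ui = ui;
        Thread t = new Thread(WorkLoop);
        t.IsBackground = true;
        t.Start();
    }

    // Called from the capture callback; just swap in the newest frame.
    public void OnNewFrame(Bitmap frame)
    {
        lock (_lock) { _latestFrame = frame; }
    }

    private void WorkLoop()
    {
        while (true)
        {
            Bitmap frame;
            lock (_lock) { frame = _latestFrame; _latestFrame = null; }
            if (frame == null) { Thread.Sleep(5); continue; }

            MotionResult result = ProcessFrame(frame);    // the expensive part

            // Marshal only the small result onto the UI thread; button
            // clicks no longer wait behind the video processing.
            _ui.BeginInvoke(new MethodInvoker(delegate { ShowResult(result); }));
        }
    }

    private MotionResult ProcessFrame(Bitmap frame) { /* motion detection here */ return new MotionResult(); }
    private void ShowResult(MotionResult r) { /* update the form */ }
}

public class MotionResult { }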

Nevermind. I'm tired. I shouldn't be writing.

However, here's the new build, with its dropped, "low-slung" stance and nifty grafted-on laser.





Closeup of the laser:



(Yes, that's hot glue. Shhhh....)

Wednesday, March 28, 2007

NXT + webcam + PC = not quite there yet.

...but we're getting closer!

Last night I took a couple of hours and finished up my "Complete redesign from the ground up."

Looks quite a bit different from the first video. A few "minor" changes:
  • Integrated a 4-shot rotary magazine. (Borrowed the idea from JP Brown, though it took me forever to figure out how to mount the Cyberslam missiles!)
  • Improved the base stability.
  • Reworked the turntable mechanism at LEAST 3 times. The first attempt used a conventional 40-tooth gearwheel (too much slop); the second and third used the NXT turntable. (Had a dickens of a time figuring out how to mount and drive it...finally ran across some examples and was able to get some traction.)




Pretty cool, eh?

And then I hook it into the motion tracking system...

And it doesn't work.

It moves too quickly. Not enough precision. Left/right can possibly be used as is, but direct-driving the up/down movement is just tooooooo fast. And when I apply only 10-15% power to the motor to rotate up/down (the idea being to do it sllloooowwwwllly), not enough juice gets to the motor to move it at all! (Especially if the batteries aren't brand-spanking-new.)
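One thing I'll probably try on the software side is a minimum "effective power" floor, so that small commanded speeds still actually move the motor. A hypothetical helper; the floor value is a guess that would need tuning per robot (and per battery level):

using System;

// Hypothetical: NXT motors stall below some minimum power, so re-map any
// non-zero request onto a range that actually moves the motor.
public static class MotorPower
{
    private const int MinEffectivePower = 20;   // percent; tune this
    private const int MaxPower = 100;

    public static int Effective(int requestedPercent)
    {
        if (requestedPercent == 0) return 0;

        int sign = Math.Sign(requestedPercent);
        int magnitude = Math.Abs(requestedPercent);

        // Re-map 1..100 onto MinEffectivePower..MaxPower.
        int mapped = MinEffectivePower + (magnitude * (MaxPower - MinEffectivePower)) / 100;
        return sign * Math.Min(mapped, MaxPower);
    }
}

(The other, probably better, answer is mechanical: gear the tilt down so "too fast" stops being a software problem in the first place.)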

Amazing, isn't it, how you can never anticipate the areas that'll really getcha? I hadn't a clue that the pan/tilt would be such a challenge.

Nor did I figure that simply setting the webcam's res up from 160x120 to 320x240 would drive a software refactoring/revision/redesign.

BUT....it's a good thing. The new base is MUCH more stable. The new pan mechanism is rock solid compared to the last.

And I used the software redesign as an excuse to start writing unit tests for the tracking modules I'd written. Which forced me to rethink some of the design. Which is a good thing.

So...back to the drawing board! And maybe with this rev I can get it a leeeetle more compact. That was another thing...this version is just monster-lovin' huge. Wiiide. (Though I must admit it makes it look a bit more imposing.)

Friday, March 23, 2007

Cardinal sins (etc etc)

According to Joel, one should never toss it all out to start from scratch.

Weeeeelllll....I am. But not on the software side. On the hardware side I tore apart the whole thing, deciding I needed a complete redesign.

(All things considered, it does have fewer moving parts than Mozilla.)

After getting a very ugly Windows app up and running, I realized (belatedly) it was time to refactor. And those unit tests I'd been meaning to write?

(laughs nervously.)

Weeellll...

You get the picture. I've got a morass of self-written stuff, some other folks' code, and some hacked-up versions of both, all in the big happy Windows Forms pot. It's time to come clean. Cleave truth from fiction. Air out the dirty socks and all that. "Do It Right".

Well, at least "right-er". As it stands, I've got so much spaghetti I'm an honorary Soprano. Oh, and I'll get rid of those nasty ArrayLists in favor of generics. (After casting my objects for the n^40th time, I realized why everyone was so excited...) Because leaving them in...you know...casts a bad light on the family.
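For anyone who hasn't made the jump yet, the difference in a nutshell (TargetPoint is just a made-up example type):

using System.Collections;
using System.Collections.Generic;

public class TargetPoint
{
    public int X, Y;
    public TargetPoint(int x, int y) { X = x; Y = y; }
}

public class ListExample
{
    public void Demo()
    {
        // The old way: everything comes back as object, so every read is a cast.
        ArrayList oldList = new ArrayList();
        oldList.Add(new TargetPoint(10, 20));
        TargetPoint p = (TargetPoint)oldList[0];      // cast number n^40...

        // The generic way: the compiler knows what's in the list. No casts.
        List<TargetPoint> newList = new List<TargetPoint>();
        newList.Add(new TargetPoint(10, 20));
        TargetPoint q = newList[0];
    }
}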

First step: New app in VS2005. Copy over the modules I've developed and the ones I've modified. Add to source control. (Done.)
Second step: Write those unit tests. (In progress.) So far the "target tracking" object is now officially unit-ed; there's a sketch of what these look like a little further down.
Third step: Separate out the logic from Andrew Kirillov's motion recognition code and try to make it work with this super noofty-cool .NET makes-it-easy DirectShow wrapper doohicky.

Why? Because I still can't figure out (and he couldn't either) how to change the resolution for the incoming video stream easily...and this library makes it a breeze.

(I feel absolutely no compunction about not digging into the nasty-icky-commie (heh) DirectShow/C++ internals. No thank you.)
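Back to the second step for a moment, here's the flavor of those unit tests. (TargetTracker below is only a stand-in stub with invented behavior, not my real tracking object; the point is the shape of the test, not the numbers.)

using System.Drawing;
using NUnit.Framework;

// Stand-in stub so the sample compiles; the real object wraps the
// motion-detection output.
public class TargetTracker
{
    public Point CurrentTarget;

    public void AddDetectedRectangle(Rectangle r)
    {
        CurrentTarget = new Point(r.X + r.Width / 2, r.Y + r.Height / 2);
    }
}

[TestFixture]
public class TargetTrackerTests
{
    [Test]
    public void ReportsCenterOfSingleRectangle()
    {
        TargetTracker tracker = new TargetTracker();

        // Feed in a fake detection at a known spot...
        tracker.AddDetectedRectangle(new Rectangle(10, 20, 4, 4));

        // ...and expect the tracked point to land in the middle of it.
        Assert.AreEqual(12, tracker.CurrentTarget.X);
        Assert.AreEqual(22, tracker.CurrentTarget.Y);
    }
}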

What else...oh yes. Need to add in some new logic to the robot targeting guts to see how pairing dead-reckoning with the laser targeting works.

The laser's really cool...but it isn't 100% accurate. I'd say it's about 60-70% accurate depending on the quality of input (my old Intel camera is pretty noisy), the light level, and the environment. A nice dark-ish neutral wall with the lights off and we've got about 95% accurate tracking (or more).

I've got these 3 cool little lasers I came across (pretty similar to the $1 pointer I found). I actually had a wild thought of doing the "Predator 3-dot" thing as the targeting mechanism.

Think: find the brightest point, then look around for 2 more. It'd help cut down on spurious input...hmm...future feature, maybe. And, well, not everyone will want to duct-tape 3 laser pointers together.
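If I ever do try it, the first cut would be something like: find the brightest pixel, then check whether at least a couple more near-maximum pixels sit within a small radius. A sketch; the radius and tolerance are pure guesses, and a real version would cluster the bright pixels into distinct dots rather than just counting them:

using System;

public static class TripleDotFinder
{
    // gray is a grayscale image as [y, x] bytes.
    public static bool HasTripleDot(byte[,] gray, int radius, byte tolerance)
    {
        int h = gray.GetLength(0), w = gray.GetLength(1);

        // Pass 1: the single brightest pixel.
        int bx = 0, by = 0;
        byte best = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (gray[y, x] > best) { best = gray[y, x]; bx = x; by = y; }

        // Pass 2: count other near-maximum pixels inside the search radius.
        int companions = 0;
        for (int y = Math.Max(0, by - radius); y < Math.Min(h, by + radius + 1); y++)
            for (int x = Math.Max(0, bx - radius); x < Math.Min(w, bx + radius + 1); x++)
            {
                if (x == bx && y == by) continue;
                if (gray[y, x] >= best - tolerance) companions++;
            }

        return companions >= 2;
    }
}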

So we'll try the dead reckoning + laser pointer approach. If nothing else, the laser pointer will help calibrate the dead reckoning. (I.e., point it at the four corners of the camera's viewable area and store the rotation values from the NXT motors controlling pan and tilt.)

Hmm...come to think of it, if I have those values stored, I can also avoid the dreaded "we're pointing at the ceiling!" syndrome, where the bot would start chasing butterflies (i.e., flutteringly bad input) past the camera's boundaries.

Monday, March 19, 2007

Houston, we have das blinkin lights!

Oh boy..I am, like SO totally stoked.

(you can tell when I begin to let slip 80's idiom with wild abandon)

(more wild than 80's hair, even.)

(totally)

Anyway. Where was I? Oh yeah. Stoked. Totally, dude.

(I'll stop now.)

Closing in on actually having the robot track a live object. I parsed out the source of Andrew Kirillov's most excellent motion tracking library and enhanced it (aka: hacked it up something awful) in order to:

1.) track a laser pointer with more accuracy
2.) calculate a returned object's center of mass
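The center-of-mass part is the simpler of the two: average the coordinates of the "on" pixels in the thresholded image. The idea in sketch form (just the concept, not Andrew's code):

public static class CenterOfMass
{
    // mask is the black-and-white blob image as [y, x] booleans.
    public static bool TryCompute(bool[,] mask, out int cx, out int cy)
    {
        long sumX = 0, sumY = 0, count = 0;
        int h = mask.GetLength(0), w = mask.GetLength(1);

        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (mask[y, x]) { sumX += x; sumY += y; count++; }

        if (count == 0) { cx = 0; cy = 0; return false; }

        cx = (int)(sumX / count);
        cy = (int)(sumY / count);
        return true;
    }
}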

I'm not only stretching my coding skills (unsafe code? Oh...that means you can mess up pointers with wild abandon!) but also dusting off dimly remembered image processing theory from back when I did QA on video codecs.

But it's fun.

Alas, all is not without stinkiness. Even after parsing the image capture code from here to eternity, I haven't been able to figure out how to change the @$!$#@$ default input resolution on the #!@$!# old Intel USB cam. It'll do 640x480 @ 15fps or even 320x240 @ 30fps. But does it default to that? Nooooooooo....I get scaled-up 160x120! (Yeah baby! We're talking Gen-u-ine Indeo 3 quality here! Cinepak, here I come!)

In a fit of insanity (what WILL this do to performance?!!) I hooked up my miniDV camcorder (via FireWire). Not only did the goodness of the MS capture generics work just fine...it captured BEAUTIFULLY!

Here's a quick shot:



An interesting bit on the laser tracker. You'd think (ha!) that all you'd need to do was grab the "brightest" dot in the image and that'd (of course) be the pointer.

Not necessarily. I wound up having to do some special sauce to track "hotspots" and filter them out. (See that bright brass hinge in the picture above? No? Well, if you squint rrREEEeally hard...that bit was especially troublesome.) Early results look very promising, but we still need more tuning.
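The "special sauce" is roughly this idea: a reflection like that hinge is bright in nearly every frame, while the laser dot moves around. So keep a per-pixel count of how often it's been bright lately and ignore the pixels that are always hot. A sketch; the thresholds are guesses, and a real version would use a sliding window instead of counting forever:

public class HotspotFilter
{
    private readonly int[,] _brightCount;
    private int _framesSeen;

    public HotspotFilter(int width, int height)
    {
        _brightCount = new int[height, width];
    }

    // Call once per frame with the thresholded "bright pixel" mask ([y, x]).
    public void Update(bool[,] brightMask)
    {
        _framesSeen++;
        int h = brightMask.GetLength(0), w = brightMask.GetLength(1);
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (brightMask[y, x]) _brightCount[y, x]++;
    }

    // A pixel that's bright in over 90% of frames is a fixture (a hinge,
    // a lamp), not the laser pointer.
    public bool IsHotspot(int x, int y)
    {
        return _framesSeen > 10 && _brightCount[y, x] > _framesSeen * 9 / 10;
    }
}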

Objects were equally in need of tweaking. The library has a pre-configured tracker that returns objects, a rectangle around them, AND a tracking number.

SUPER handy. Except it's managed code (I'm thinking), and regardless, it's slow. Well, slower than the "optimized" tracker that returns a pixelated-but-closer tracked object boundary. I somehow managed to hack in a bit of code to feed that pixelated image (well, a black and white version) into the object tracker for blob numbering, use the resulting blobish goodness to figure out a center of mass, and still keep things running pretty well. (The CPU isn't smoking yet.)

While it's not "aim for the whites of their eyes!" it'll be somewhat more accurate than aiming for the center of a returned rectangle. (or so I'm hoping)

Lessee...what's next..

Develop a class to track an object's last n positions, averaging velocity and position to estimate where it thinks the pointer/object should be next...and see if the returned tracked coordinates are close.

(In case the laser does take a jump, it'll help the robot track more steadily).

I.e., we'll figure out where we think the object is supposed to be, and if it isn't there, we'll just "fake it" for a few frames, see if it comes back, and keep on.
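In sketch form, the predictor class I have in mind looks something like this (all names and thresholds invented):

using System;
using System.Collections.Generic;
using System.Drawing;

public class PositionPredictor
{
    private readonly List<Point> _history = new List<Point>();
    private readonly int _maxHistory;
    private readonly int _maxJump;        // pixels; beyond this, distrust the reading
    private int _coastingFrames;

    public PositionPredictor(int maxHistory, int maxJump)
    {
        _maxHistory = maxHistory;
        _maxJump = maxJump;
    }

    // Feed in each measured position; get back either the measurement or,
    // if it looks like a spurious jump, the predicted ("faked") position.
    public Point Filter(Point measured)
    {
        if (_history.Count < 2)
        {
            Remember(measured);
            return measured;
        }

        Point predicted = Predict();
        int dx = measured.X - predicted.X, dy = measured.Y - predicted.Y;
        bool plausible = dx * dx + dy * dy <= _maxJump * _maxJump;

        if (plausible || _coastingFrames >= 5)    // after 5 faked frames, give in
        {
            _coastingFrames = 0;
            Remember(measured);
            return measured;
        }

        _coastingFrames++;
        Remember(predicted);
        return predicted;
    }

    private Point Predict()
    {
        // Average velocity over the history, applied to the last known point.
        Point first = _history[0], last = _history[_history.Count - 1];
        int steps = _history.Count - 1;
        return new Point(last.X + (last.X - first.X) / steps,
                         last.Y + (last.Y - first.Y) / steps);
    }

    private void Remember(Point p)
    {
        _history.Add(p);
        if (_history.Count > _maxHistory) _history.RemoveAt(0);
    }
}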

The laser tracks reasonably steadily, but the object "center of mass"...not so much. Now that I have the callback code in, that's possible.

Oh yeah, did I mention I had to relearn the whole delegate-message-passing-in-a-threadsafe-way two-step? The good news is it took considerably less time this time around: about an hour vs. several days. That helped put things into perspective.

Overall I'm very pleased at how it's coming along. Good, solid accomplishment-reinforcement cycle. Keeps me wanting to push, learning more.

So I wonder how the dog would react to being an "object"...?

Monday, March 12, 2007

Cube Area Missile Defense: Prototype 1

Here Caleb demonstrates the first prototype of the "base unit" + fire control.

I apologize for the crappy video + wonky editing + low res. It was a spur-of-the-moment capture with the family digital (still) camera's movie mode...



It's being controlled via a custom Windows Forms app written in C#, using the most excellent MindSqualls .NET API for the NXT.

The laser pointer is a $1 find from a local dollar store, grafted on.
The missile is a Technic Competition Arrow and launcher. (I bought a bunch from BrickLink a while back.)

Control at this point is via Bluetooth, so it's wireless.
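For the record, the MindSqualls side of things boils down to roughly this. Treat it as from-memory pseudocode: exact class names, constructors, and method signatures vary between library versions, so don't expect it to compile as-is.

using NKH.MindSqualls;    // namespace as I remember it; may differ by version

public class TurretControl
{
    private NxtBrick _brick;

    public void Connect(byte comPort)
    {
        // The Bluetooth link shows up as a serial port on the PC side.
        _brick = new NxtBrick(comPort);
        _brick.MotorA = new NxtMotor();     // pan motor, say
        _brick.Connect();
    }

    public void Pan(sbyte power, ushort degrees)
    {
        // Run the pan motor at the given power for a limited number of degrees.
        _brick.MotorA.Run(power, degrees);
    }

    public void Disconnect()
    {
        _brick.Disconnect();
    }
}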

Next steps:
  1. add a webcam control to the windows form app
  2. refactor the physical manifestation (ie the robot) for less gear lash.
  3. abstract out control functions for movement so I can plug in (arbitrarily) programmatic, keyboard, joystick, "forms buttons", or mouse control. (See the sketch after this list.)
  4. calibration routines to map the camera's field of view to the arm's range of motion
  5. code to move the "aiming point" to a designated spot.
  6. plug in motion detection
  7. point at the center of mass of a detected movement
  8. experiment with the ultrasonic sensor to see how accurately it'll detect distance...maybe figure out some simple ballistics. (alternately, only fire at an object if it's within a given distance.)
  9. refactor the base with multi-shot capabilities.
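For step 3, what I have in mind is something like this (the interface is hypothetical, just to show the shape of the abstraction):

// Everything that can "drive" the turret (keyboard, joystick, form buttons,
// mouse, or the tracking code itself) talks to one small interface, so
// swapping input methods never touches the NXT-facing code.
public interface ITurretController
{
    void Pan(int speedPercent);     // negative = left, positive = right
    void Tilt(int speedPercent);    // negative = down, positive = up
    void StopAll();
    void Fire();
}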