Wednesday, June 06, 2007

Whence do I point, Horatio?

The squirrels...they run so inside my head.

Gah.

Ok, here's the situation. (err...apologies to Will Smith) We're still at the last point. (see below) Trying to figure out where the !$$@#!# the interceptor is pointing at any given time.

Then it came to me...if the camera is mounted ON the interceptor, we just "aim" by putting any moving object in the middle of the video and ker-pow! Simple, right?

Of course not.

There's a reason I didn't take this approach at the beginning. Because...if the camera is mounted on the interceptor, it's indeed easier to target...but it's a heckofa lot harder to detect a moving object. No simple segmentation of comparing background images to the current image because... the background is constantly changing!

Of course, there are ways to get around this. And given my lack of brainstorms on how to solve my other aiming problem, I've been researching just how hard it'd be to negate camera motion and extract moving objects from a moving camera stream. Because I'm just sure someone has done it..!

In theory, it's pretty simple. (start by googling "camera motion", "optical flow") You just need to identify a series of good tracking points in the image (think: corners, contrast...what we call "video texture"). Compare their locations from one frame to the next, collect all your vectors, assume that the most common ones are due to camera motion, and you've got your first part.

Then go back and look for motion vectors that didn't match. Those'd be your "moving object".
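
Something like this, in spirit anyway. (A sketch only! The types and the crude "bin the vectors and vote" logic are mine for illustration; the actual point tracking would come from OpenCV or whatever library ends up doing the heavy lifting.)

using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of the "majority vote" idea: bin the frame-to-frame motion vectors of
// tracked feature points, call the most common bin "camera motion", and flag
// everything else as a potential moving object.
public struct MotionVector
{
    public double Dx;
    public double Dy;
}

public static class EgoMotion
{
    // Quantize each vector to a coarse grid cell so nearly-equal vectors vote together.
    private static (int, int) Bin(MotionVector v, double cellSize) =>
        ((int)Math.Round(v.Dx / cellSize), (int)Math.Round(v.Dy / cellSize));

    public static List<int> FindOutliers(IList<MotionVector> vectors, double cellSize = 2.0)
    {
        // The bin with the most votes is assumed to be global (camera) motion.
        var dominant = vectors
            .GroupBy(v => Bin(v, cellSize))
            .OrderByDescending(g => g.Count())
            .First().Key;

        // Everything that didn't vote with the majority is a candidate "moving object" point.
        var outliers = new List<int>();
        for (int i = 0; i < vectors.Count; i++)
            if (Bin(vectors[i], cellSize) != dominant)
                outliers.Add(i);
        return outliers;
    }
}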

See? Simple.

Right.

Beaucoup research has been done on this very thing. As a matter of fact, Intel even made available a C library called OpenCV (subsequently open-sourced) that has a bunch of useful routines for accomplishing this very thing.

Of course, it's not C#. So off we go to see if there's a wrapper/converted API for us poor auto-garbage-collection-addicted fools. And the gods did smile, and indeed there are two! Unfortunately, neither is under active development, and neither is a complete conversion, but them's the breaks.

OpenCVDotNet got me up and running quickly. Good samples, but a bit slim on documentation.

SharperCV seems to be the more complete of the two, both from a documentation and an available function point of view.

I really did like OpenCVDotNet, but in the end I'm looking to SharperCV. It seems to follow the OpenCV format more closely, which makes it easier to translate the C-oriented tutorials and samples to C#.

However, before I tear the current source to bits and begin a rebuild, I'm going to see if I can shoot some videos of current performance with a static camera in a light-controlled environment.

All that said, I'm beginning to wonder how much longer I'm going to keep going on this particular direction. At this point, I've accomplished several of the original goals of the project.

Learn C#/use decoupled design/try agile practices
Self grade: B+
I've refactored the inner workings several times. I wrote more, and more elaborate, unit tests for this project than pretty much anything to this point. For my level of expertise, I'm pretty happy with the design as well.

That's realizing I did ok for where I was at...the next project will be held to much higher standards.


Provide a coding example I'm proud of
Self Grade: C
Some of the code is nice. Commented. Well reasoned and logical.
And some is..well..spaghetti. Tightly coupled components. Parents requiring somewhat intimate knowledge of the children's inner workings. I did do my best to decouple, but I learned as I went. (that's the nice way to put it!)

Exercise programming problem solving muscles
Self grade: A
The muscles are sore. And I'm not benching a metaphorical 300lb programming stack. But I've certainly re-awakened some of those logical decomposition skills.


Do Something Cool
Self Grade: A- : Well, at least >>I<< think it's pretty cool. However, I had visions of a very fast tracker with ominous voices tracking cubicle visitors. The NXT motors and my design just didn't seem to allow that. I'm sure a better design could have alleviated some of the problems. Hrmmm...the NXTShot sure looks to be more responsive. I may just have to do some "mechanical design analysis" for a Mark III version. ;^)


Next up, videos and pix of current performance...

Friday, May 18, 2007

Picking up where we left off...

When you last left your intrepid hero, things were looking up. The system was back up and functional. Das blinkinlights were doing their thing. Life seemed grand.

And then reality.

I hooked everything up, plugged it all in, and remounted the targeting laser. In a fit of brilliant insanity, I then fired up the control program and told the kids to build some lego towers to shoot down.

They obliged.

And all heck broke loose.

You see...when I hot-glued the targeting laser to the lego beam, it was kinda-sorta fudged on. "Gee, that looks pretty straight" sorta thing.

Well, it wasn't. From 24" away, the interceptor was landing missiles at least 2-3" off from where the laser was pointing. Not good when the kids are so proud of themselves for actually getting the laser on the target and then the arrow has the GALL to land somewhere off yonder.

I gave everybody an extra turn, and promised to "fix the robot".

Then I scratched my head. However am I going to "calibrate" this laser?!! It's hotglued on fer pete's sake. Long story short, I tried a bunch of lego jigs, but nothing worked quite right. So I resigned myself to removing and re-gluing.

Of course, this necessitated some sort of aligning jig. In a fit of inspiration (I have them often. Fits, that is. Inspiration more occasionally.) I envisioned some sort of calibration jig.

Mounted the laser. Shined it on the brick. Hotglued everything in place.

Ok, that's on the mechanical side. What about on the software side?

I've been ruminating on how to keep a solid lock on the laser from the camera. Camera noise (spurious RGB values from the cheap webcam) has been somewhat of a problem. As well, the inherent nature of 160x120 video (never did solve that) has been an issue.

So I did what any self-respecting engineer would do. I upped the minimum hardware requirements. ;^)

I have an older DV camcorder that will only function as a camera, now. So I bought an inexpensive 1394 PC card from Newegg, paired it up with the Sony DV camcorder, and now I get BEAUTIFUL CLEAR pictures! Better optics, less noise, AND I can turn off the auto-exposure/brightness in cam. Sweet.

Of course, that's all fine and good...but we still have the "oblique laser" problem.

If the target surface is at a nice 90 degrees to the laser, we get the standard bright red dot.

HOWEVER...if the target is at an oblique angle...



The laser's beam gets spread out into a "puddle"; it's less noticeable overall, and likely not the brightest spot in the frame.



I had an epiphany today. (I promise, the point is coming soon!)

I happened across the "How Stuff Works" page on the Apache helicopter's Hellfire missiles. (mmm...missiles) And I was inspired by the targeting method. Apparently, the older Hellfires were laser targeted, where the missile would seek to whatever target was being painted by the laser. But not just any laser...

A laser that pulsed to a pre-determined code. (that was downloaded to the missile prior to launch)

"Ah-HA!" I thought. If I can somehow parse out the laser (maybe it's got unique HSV values?!!) and match what I think the target point is against some sort of pulse pattern...hmm...

Thinking...thinking...

Friday, April 13, 2007

The pricewars begin...

Ok, now I understand how all those ebay-ers can sell NXT's for a "buy it now!" price of $220 and make money...

Not only is CompUSA selling the NXT for $199, but now Best Buy is doing it for $189 + $6 shipping. (thanks NXT Step!)

Woot!

Those wild thoughts of buying a second and third set are getting correspondingly more tempting... =^)

Wednesday, April 11, 2007

Trackin fool

Refactoring to allow for testing is sooooo a good thing.

I finally had the brainstorm. I'd been developing two separate projects for this. One is down-n-dirty, get 'er in. Test it out. Oops..that's ugly but it works.

And one "nice, pristine, how it's supposed-to-be".

I'll let you guess which one actually made progress. For a while.

The pretty one was essentially used as a testing ground for nunit. And therefore has a nice bunch of unit tests.

And then it happened in the "real" project. Ka-blam. Hit a wall. Why isn't this working?!! It should work. The logic is right. Those d!$#m gremlins are getting in between the parsing and compiling stages. I just know it.

Deep breath. Back off. Put the mouse down. Take a walk.

Then it hits me. I'm having so much trouble (partially) because I'm having to do so much crufty, liberal insertion of System.Diagnostics.Debug.WriteLine.

(true confessions, here)

And it's a PITA to debug. Because...well...things are so tightly coupled I'm not quite sure WHERE to put in the debug statements and even when I do I'm making assumptions about other pieces working and that leads to...

Well, we all know what happens when you assume.

So in a fit of brilliantly obvious inspiration, I did two things.

1.) I ported the unit tests over to the "real/working but somewhat crufty" implementation.

2.) Did some decoupling. Specifically, there were three pieces:
  • determine which way I'm supposed to go based on where I am and where the target is
  • issue a command to move the robot ( myrobot.moveright() )
  • translate the moveright into actual motor directions (move motor b @ 70% power in the left direction)

The third had already been isolated. But I was doing 1 and 2 in the same function. I pulled them apart. Now a parent function says "where should we go?" and the fxn returns a "left, down" or somesuch.
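
In sketch form, the split looks something like this (names are illustrative, not straight out of my source):

// Sketch of the decoupling described above: the decision function is pure
// (current point + target point in, direction out), so it can be unit tested
// without a robot attached.
public enum Horizontal { None, Left, Right }
public enum Vertical { None, Up, Down }

public struct Move
{
    public Horizontal H;
    public Vertical V;
}

public static class Aiming
{
    // Screen coordinates: x grows right, y grows down. 'deadZone' keeps us
    // from hunting back and forth when we're already close enough.
    public static Move WhereShouldWeGo(int currentX, int currentY,
                                       int targetX, int targetY,
                                       int deadZone = 2)
    {
        var move = new Move { H = Horizontal.None, V = Vertical.None };

        if (targetX < currentX - deadZone) move.H = Horizontal.Left;
        else if (targetX > currentX + deadZone) move.H = Horizontal.Right;

        if (targetY < currentY - deadZone) move.V = Vertical.Up;
        else if (targetY > currentY + deadZone) move.V = Vertical.Down;

        return move;
    }
}

// The parent function then translates the result into motor commands, e.g.:
//   var m = Aiming.WhereShouldWeGo(cur.X, cur.Y, tgt.X, tgt.Y);
//   if (m.H == Horizontal.Left) myRobot.MoveLeft();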

NOW we're cookin. Unit tests were written for each of 4 quadrants, like so:


// $ = target point (where to move to)
//
// --------------------* < -100,100 (max rot values)
// | quad 1  |  quad 2 |
// |         |         |
// |         |         |
// ----------$----------
// | quad 3  | quad 4  |
// |         |         |
// |         |         |
// *--------------------
// ^(100,-75) min rot values
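
The tests themselves look roughly like this (a sketch against the illustrative WhereShouldWeGo above, not my actual test file):

using NUnit.Framework;

// Minimal NUnit sketch: one test per quadrant of the diagram above, with the
// target sitting at the center and the current aim point starting in each quadrant.
[TestFixture]
public class AimingQuadrantTests
{
    private const int TargetX = 0, TargetY = 0;

    [Test]
    public void Quad1_UpperLeft_MovesRightAndDown()
    {
        var m = Aiming.WhereShouldWeGo(-50, -50, TargetX, TargetY);
        Assert.AreEqual(Horizontal.Right, m.H);
        Assert.AreEqual(Vertical.Down, m.V);
    }

    [Test]
    public void Quad2_UpperRight_MovesLeftAndDown()
    {
        var m = Aiming.WhereShouldWeGo(50, -50, TargetX, TargetY);
        Assert.AreEqual(Horizontal.Left, m.H);
        Assert.AreEqual(Vertical.Down, m.V);
    }

    [Test]
    public void Quad3_LowerLeft_MovesRightAndUp()
    {
        var m = Aiming.WhereShouldWeGo(-50, 50, TargetX, TargetY);
        Assert.AreEqual(Horizontal.Right, m.H);
        Assert.AreEqual(Vertical.Up, m.V);
    }

    [Test]
    public void Quad4_LowerRight_MovesLeftAndUp()
    {
        var m = Aiming.WhereShouldWeGo(50, 50, TargetX, TargetY);
        Assert.AreEqual(Horizontal.Left, m.H);
        Assert.AreEqual(Vertical.Up, m.V);
    }
}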


And as the French say, "voilà!" (literally, "let us eat cheese!")

In the end all was happy. Except tracking was much less precise than I'd like. Gear slop and imprecision in initial calibration were affecting things far more than I'd like.

So...where are we at?

The laser tracking worked best for precision...but it suffered from light refractions and went somewhat nuts in anything but a nice, lowlight environment. Oh, and if the beam was scattered or at an angle (think: shining a laser pointer on a table at a steep angle), things went wonky. Chasing butterflies.

Dead reckoning isn't particularly precise...but it's kinda/sorta "in the neighborhood".

Can I combine the two? Maybe limit the search radius for the laser pointer to within x pixels of the estimated dead reckoning solution? It's a thought.
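
In sketch form (the names, the 20-pixel radius, and the fallback behavior are all made up for illustration):

using System;
using System.Drawing;

// Sketch of the "combine the two" idea: trust dead reckoning for a rough
// position, then only accept a laser-dot detection if it falls within a
// search radius of that estimate.
public static class HybridAim
{
    public static Point? PickLaserPoint(Point deadReckoningEstimate,
                                        Point? detectedLaserPoint,
                                        double searchRadiusPixels = 20)
    {
        if (detectedLaserPoint == null)
            return deadReckoningEstimate;   // no dot found: fall back to the estimate

        int dx = detectedLaserPoint.Value.X - deadReckoningEstimate.X;
        int dy = detectedLaserPoint.Value.Y - deadReckoningEstimate.Y;

        // Inside the radius: believe the laser. Outside: probably refraction
        // noise or a "hotspot", so stick with dead reckoning.
        return Math.Sqrt(dx * dx + dy * dy) <= searchRadiusPixels
            ? detectedLaserPoint
            : deadReckoningEstimate;
    }
}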

But regardless, it'll be integrated and tested. =^)

In Mechanical news...

The gearing has been reduced and reworked for both pan and tilt. Of course, this adds to the gearlash problem, but it was necessary for precision reasons.

("Slow 'er down!" everyone said.)

(Well, not everyone.)

(Ok, maybe just me.)

Anyway.

But it's working. Back to "das blinkinlights" working.

But now we've got
  • boundary checking (no more chasing butterflies and grinding gears)
  • Four shot rotary magazine
  • Smaller tilt cradle footprint (geez. I should be in marketing. No...no, actually. I shouldn't)
  • A cool "fah-WOOSH!" sound when the missile is fired. (put that in for the kids)
Now it's time to put in code to track values returned from the sonar (ultrasonic) sensor. Fire when the target is within...oh...let's say 50cm.
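
Roughly like this. (A sketch: the trigger logic is wrapped around a delegate so I'm not asserting anything about the exact MindSqualls sensor API; the real distance read would come from its ultrasonic sensor object.)

using System;

// Sketch of the "fire when inside 50 cm" rule, abstracted so it can be unit
// tested without hardware attached.
public class FireControl
{
    private readonly Func<int?> _readDistanceCm;   // null when no echo / no reading
    private readonly Action _fire;
    private readonly int _triggerDistanceCm;

    public FireControl(Func<int?> readDistanceCm, Action fire, int triggerDistanceCm = 50)
    {
        _readDistanceCm = readDistanceCm;
        _fire = fire;
        _triggerDistanceCm = triggerDistanceCm;
    }

    // Call on every polling tick; returns true if we fired.
    public bool Poll()
    {
        int? distance = _readDistanceCm();
        if (distance.HasValue && distance.Value <= _triggerDistanceCm)
        {
            _fire();
            return true;
        }
        return false;
    }
}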

Ok. Go to it.

Wednesday, April 04, 2007

Avast ye skirvy dogs, protect the staplers at all costs!

...because...you know...that's about the highest aspiration I have for this once it's done.

Oh, and it'll look cool. And I'll have the undying admiration of my coworkers. Some of them. A few. Ok, at least one.

In other news, a reader (from whence he came I know not..!) pointed me to a project he's working on.
http://www.foxbox.nl/lego/index.asp?FRMid=39

Check it out! Similar to JP Brown's Aegis, but with his own unique execution...and he's got pictures...LOTS of pictures, and even a few videos.

Actually, check the site out even if you're not interested in this. This guy rivals Philo for the pure amount-of-stuff that he's built and documented. I particularly liked the automated battery tester.

In general construction news...

Given my own...umm...unique execution of the pan-n-tilt, I may have to crib some of these guys' executions. If nothing else, I'm thinking of rebuilding the firing mechanism with a conventional lego motor for compactness.

Of course, then I lose the ability to control rotation via the built in rotation sensors...drat drat drat...everything's a tradeoff. Hmm. Maybe I'll just attempt to rebuild using the nxt motor and trying to move things more inboard...

In other news, the dead reckoning method for moving stuff around is coming...well...it's coming...umm...yeah.

After more than a bit of headscratching the boundary detection is working. That is, I can calibrate the "aim box" for the beast (max and min horizontal and vertical rotations) and if we start to point outside of those boundaries, the control code catches it and moves back inside.
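
The check itself is nothing fancy. Something like this (illustrative names and limits, not my actual numbers):

using System;

// Sketch of the "aim box" check: clamp the requested pan/tilt rotation against
// the calibrated limits so the bot stops chasing butterflies off the edge.
public class AimBox
{
    public int PanMin = -100, PanMax = 100;     // degrees of motor rotation
    public int TiltMin = -75, TiltMax = 100;

    public bool IsInside(int pan, int tilt) =>
        pan >= PanMin && pan <= PanMax && tilt >= TiltMin && tilt <= TiltMax;

    // Returns the nearest in-bounds rotation, i.e. "nudge back inside".
    public (int Pan, int Tilt) Clamp(int pan, int tilt) =>
        (Math.Min(Math.Max(pan, PanMin), PanMax),
         Math.Min(Math.Max(tilt, TiltMin), TiltMax));
}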

However, I had two setbacks when translating this to dead reckoning movement.

1.) I've got a @#$!$ bug in the code. Currently whenever I tell it to "track to a point!" it's thinking that the point is somewhere between its toes and left armpit. Ie, it moves left and down...left and down...always left and down. I've stared at the code till my eyes bugged out and the logic error is still eluding me. But I'll find and squash it. (eventually)

2.) I..umm...err. (this is really embarrassing) I lost my control buttons. Yeah, these:

I was playing with tab layout controls, and I think somewhere I grabbed and moved them. Somewhere. As a group.

And now, while Visual Studio thinks they're there (and all the handlers/etc are still present), they are nowhere to be found on the form.

Of course my (ever bright, savvy, and non-developer) wife's perspective was: "You have backups, right? You keep versions don't you?"

Argh! All that ranting and raving about idiots who code without source control comes crashing down upon my head...she must have actually been listening.

Yes, I do have a previous version in subversion. And it's only about 2-3 days old. But there's still quite a bit of difference 'tween the two.

The painful lessons are the ones best learned. (and this could have been MUCH more painful!)

Cheerio!
-aaron

Tuesday, April 03, 2007

Mindstorms NXT $199 @ CompUSA!


I just happened to be at my local CompUSA about 2 weeks ago looking for (oh the irony!) an inexpensive USB joystick to control the robot.

Lo and behold...in the same aisle, facing shelves on the bottom were [da dum!] Mindstorms NXT kits. "Nifty enough", says I, "but what's that little sticker..."

Holy hot hannah batman...$199! (the sticker said "new low price!")

The link above is to the online store, which also reflects the $199 price. Alas, they are all sold out for delivery. However, my (local) store has some in...if you have local CompUSA you might just be as lucky!

I've been itching for a second set already...mostly because I keep getting inspired by other folks' creations and want to give 'em a try without tearing apart my work-in-progress. Oh the agony, your name is Lego...

And the joystick? Eh...didn't buy one...yet. I'll tackle DirectInput and the attendant control issues later.

Saturday, March 31, 2007

Two steps forward, one step back...

It's late. My eyes are tired. My brain is tired.

But it's working again.

The first step to a "dead reckoning" method of tracking was to get readings on the tachometers in the NXT's motors. Ie "number of degrees turned".

Then I had to do something useful. So I decided to use the laser as a guide to defining a "box" for targeting. Target the upper right & lower left, use those "coordinates" as scales for translating screen coordinates to rotation coordinates. (so to speak)
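
The translation is just a linear interpolation between the two calibrated corners. A sketch (names are made up, not my exact code):

using System.Drawing;

// Sketch of the two-corner calibration: point the laser at the upper-right and
// lower-left of the camera's view, record the tacho readings, then linearly
// interpolate any pixel coordinate into a pan/tilt rotation.
public class ScreenToRotation
{
    // Calibration samples: pixel position and the motor rotations that put the laser there.
    public Point PixelA, PixelB;            // e.g. upper-right and lower-left of the frame
    public int PanA, TiltA, PanB, TiltB;    // tachometer degrees at those two points

    public (int Pan, int Tilt) Map(Point pixel)
    {
        double tx = (double)(pixel.X - PixelA.X) / (PixelB.X - PixelA.X);
        double ty = (double)(pixel.Y - PixelA.Y) / (PixelB.Y - PixelA.Y);

        int pan = (int)(PanA + tx * (PanB - PanA));
        int tilt = (int)(TiltA + ty * (TiltB - TiltA));
        return (pan, tilt);
    }
}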

And to make sure I was getting things working properly, I decided to implement some boundary stuff. Ie, if we moved beyond the bounds, stop the movement (say, out of the top of the frame) and nudge things back in.

I did this also (partially) because I've had the robot chase butterflies and grind gears. Totally. I decided that reading the tachs in the motors would help keep things from moving too wildly.

Now I'm finding (of course) new issues. Like I'm getting what seems to be coordinate drift. I'm guessing it has to do with gear lash in the targeting. No biggie.

An even larger issue (though) is this...I thought I'd try targeting the corners of the "video window". And..hoooooboy.

The video motion detection needs some serious optimization. When it's off, my Pentium M 1.6GHz chugs along at around 12% CPU. Connect to the camera and things go wonky.

(wonky. That's a technical term)

Anyway. UI becomes unresponsive. Click on a button and...wait...for...a...reaction...oops! There goes therobotandit'sturningALLTHEWAYAROUNDOHCRAP!!!

(because even though I release the button the "button is up you can stop now" message is still waiting behind the video processing queue and...)

Nevermind. I'm tired. I shouldn't be writing.

However, here's the new build. With dropped "low slung" stance and nifty grafted on laser.





Closeup of the laser:



(Yes, that's hot glue. Shhhh....)

Wednesday, March 28, 2007

NXT + webcam + PC = not quite there yet.

...but we're getting closer!

Last night I took a couple of hours and finished up my "Complete redesign from the ground up."

Looks quite a bit different from the first video. A few "minor" changes:
  • Integrated a 4 shot rotary magazine. (borrowed the idea from JP Brown though it took me forever to figure out how to mount the cyberslam missiles!)
  • Improved the base stability.
  • Reworked the turntable mechanism at LEAST 3 times. First time used a conventional 40 tooth gearwheel. (too much slop) Second and third used the NXT turntable. (Had the dickens of a time trying to figure out how to mount and drive it...finally ran across some examples and was able to get some traction)




Pretty cool, eh?

And then I hook it into the motion tracking system...

And it doesn't work.

Moves too quick. Not enough precision. Left/right can possibly be used as is, but direct driving the up/down movement is just tooooooo fast. Additionally, when only applying 10-15% power to the motor to rotate up/down (the idea being to do it sllloooowwwwllly) not enough juice gets to the motor to move it! (especially if the batteries aren't brand-spanking-new.)

Amazing, isn't it, how you can never anticipate the areas that'll really getcha? I hadn't a clue that the pan/tilt would be such a challenge.

Nor did I figure that simply setting the webcam's res up from 160x120 to 320x240 would drive a software refactoring/revision/redesign.

BUT....it's a good thing. The new base is MUCH more stable. The new pan mechanism is rock solid compared to the last.

And I used the software redesign to start writing unit tests for the tracking modules I'd written. Which forced me to rethink some of the design. Which is a good thing.

So...back to the drawing board! And maybe with this rev I can get it a leeeetle more compact. That was another thing...this version is just monster-lovin huge. Wiiide. (though I must admit that it makes it look a bit more imposing. )

Friday, March 23, 2007

Cardinal sins (etc etc)

According to Joel, one should never toss it all out to start from scratch.

Weeeeelllll....I am. But not on the software side. On the hardware side I tore apart the whole thing, deciding I needed a complete redesign.

(All things considered, it does have fewer moving parts than mozilla.)

After getting a very ugly windows app up and running, I realized (belatedly) it was time to refactor. And those unit tests I'd been meaning to write?

(laughs nervously.)

Weeellll...

You get the picture. I've got a morass of self written stuff, some other folks' code, and some hacked up versions of both all in the big happy windows forms pot. It's time to come clean. Cleave truth from fiction. Air out the dirty socks and all that. "Do It Right".

Well, at least "right-er". Before I've got so much spaghetti I'm an honorary Soprano. Oh, and get rid of those nasty arraylists in favor of generics. (after casting my object for the n^40th time, I realized why everyone was so excited...) Because leaving them in...you know...casts a bad light on the family.

First step: New app in VS2005. copy over my developed modules & those modified. Add to source control. (done)
Second step: Writing those unit tests. (in progress). So far the "target tracking" object is now officially unit-ed.
Third step: Separate out the logic from Andrew Kirillov's motion recognition code and try and make it work with this super noofty-cool .net makes-it-easy directshow wrapper doohicky.

Why? Because I still can't figure out (and he couldn't either) how to change the resolution for the incoming video stream easily...and this library makes it a breeze.

(I feel absolutely no compunction about not digging into the nasty-icky-commie (heh) directshow/c++ internals. No thank you.)

What else...oh yes. Need to add in some new logic to the robot targeting guts to see how pairing dead-reckoning with the laser targeting works.

The laser's really cool...but it isn't 100% accurate. I'd say it's about 60-70% accurate depending on the quality of input (my old Intel camera is pretty noisy), the light level, and the environment. A nice dark-ish neutral wall with the lights off and we've got about 95% accurate tracking (or more).

I've got these 3 cool little lasers I came across (pretty similar to the $1 pointer I found). I actually had a wild thought of doing the "Predator 3-dot" thing as the targeting mechanism.

Think: find the brightest point, look around for 2 more. It'd help cut down on spurious input...hmm...future feature maybe. And, well, not everyone will want to duct-tape 3 laser pointers together.

So we'll try the dead reckoning + laser pointer approach. If nothing else, the laser pointer will help calibrating the dead-reckoning. (ie, point it at the four corners of the camera's viewable area, store the rotation values in the NXT motors controlling pan and tilt)

Hmm...come to think of it, if I have those values stored, I can also avoid the dreaded "we're pointing at the ceiling!" syndrome when the bot would start chasing butterflies (ie flutteringly bad input) past the camera's boundaries.

Monday, March 19, 2007

Houston, we have das blinkin lights!

Oh boy..I am, like SO totally stoked.

(you can tell when I begin to let slip 80's idiom with wild abandon)

(more wild than 80's hair, even.)

(totally)

Anyway. Where was I? Oh yeah. Stoked. Totally, dude.

(I'll stop now.)

Closing in on actually having the robot track a live object. I parsed out the source of Andrew Kirillov's most excellent motion tracking library, enhanced (aka: hacked up something awful) it in order to:

1.) track a laser pointer with more accuracy
2.) calculate a returned object's center of mass

I'm not only stretching my coding skills (unsafe code? Oh...that means you can mess up pointers with wild abandon!) but also dusting dimly remembered image processing theory from back when I did QA on video codecs.

But it's fun.

Alas, all is not without stinkyness. Even after parsing the image capture code from here to eternity, I haven't been able to figure out how to change the @$!$#@$ default input resolution on the #!@$!# old Intel USB cam. It'll do 640x480 @ 15fps or even 320x240 @ 30fps. But does it default to that? Nooooooooo....I get scaled up 160x120! (yeah baby! We're talking Gen-u-ine Indeo 3 quality here! Cinepak here I come!)

In a fit of insanity (what WILL this do to performance?!!) I hooked up my miniDV camcorder (via FireWire). Not only did the goodness of the MS capture generics work just fine...it captured BEAUTIFULLY!

Here's a quick shot:



An interesting bit on the laser tracker. You'd think (ha!) that all you'd need to do was grab the "brightest" dot in the image and that'd (of course) be the pointer.

Not necessarily. I wound up having to do some special sauce to track "hotspots" and filter them out. (see that bright brass hinge in the picture above? No? Well if you squint rrREEEeally hard...that bit was especially troublesome) Early results look very promising, but we still need more tuning.
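
The "special sauce" mostly amounts to remembering which pixels are always bright. A sketch of the idea (thresholds are invented for illustration, not what's actually in the code):

// Sketch of the hotspot filter: a pixel that is bright in nearly every frame
// (that brass hinge) gets masked out, so only transient bright spots are
// treated as laser candidates.
public class HotspotFilter
{
    private readonly int[,] _brightCount;
    private int _frames;

    public HotspotFilter(int width, int height) => _brightCount = new int[width, height];

    // Feed a per-frame brightness map (0..255 per pixel).
    public void Observe(byte[,] brightness, byte threshold = 200)
    {
        _frames++;
        for (int x = 0; x < brightness.GetLength(0); x++)
            for (int y = 0; y < brightness.GetLength(1); y++)
                if (brightness[x, y] >= threshold)
                    _brightCount[x, y]++;
    }

    // A pixel that's been bright in more than ~80% of frames is a hotspot, not the laser.
    public bool IsHotspot(int x, int y) =>
        _frames > 10 && _brightCount[x, y] > 0.8 * _frames;
}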

Objects were equally in need of tweaking. The library has a pre-configured tracker that returns objects, a rectangle around them, AND a tracking number.

SUPER handy. Except it's managed code. (I'm thinking). Regardless, it's slow. Well, slower than the "optimized" tracker that returned a pixelated-but-closer object tracked boundary. I somehow managed to hack in a bit of code to feed that pixelated image (well, a black and white version) into the object tracker for blob numbering, use the resulting blobish goodness to figure out a center of mass, and still keep things running pretty well. (cpu isn't smoking yet)

While it's not "aim for the whites of their eyes!" it'll be somewhat more accurate than aiming for the center of a returned rectangle. (or so I'm hoping)
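
The center-of-mass bit is just averaging the coordinates of the "on" pixels in the black-and-white blob image. Sketch (illustrative, not my exact code):

using System.Drawing;

// Sketch of the center-of-mass step: average the coordinates of all "on"
// pixels in the blob mask. Aiming here should beat the center of a bounding
// rectangle for irregular blobs.
public static class BlobMath
{
    public static Point? CenterOfMass(bool[,] mask)
    {
        long sumX = 0, sumY = 0, count = 0;
        for (int x = 0; x < mask.GetLength(0); x++)
            for (int y = 0; y < mask.GetLength(1); y++)
                if (mask[x, y]) { sumX += x; sumY += y; count++; }

        if (count == 0) return null;        // no blob pixels this frame
        return new Point((int)(sumX / count), (int)(sumY / count));
    }
}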

Lessee...what's next..

Develop a class to track an object's last n positions and average velocity, use those to estimate where it thinks the next position of the pointer/object should be...and see if the returned tracked coordinates are close.

(In case the laser does take a jump, it'll help the robot track more steadily).

Ie, we'll figure out where we think the object is supposed to be, and if it isn't, we'll just "fake it" for a few frames, see if it returns, and keep on.

The laser tracks reasonably steadily, but the object "center of mass"...not so much. Now that I have the callback code in, that's possible.
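
Here's roughly what I have in mind (a sketch; names, history length, and the 30-pixel "jump" limit are all placeholders):

using System.Collections.Generic;
using System.Drawing;

// Sketch of the predictor described above: keep the last n positions, use the
// average velocity to guess where the point should be next, and "fake it" with
// that guess when a frame's detection jumps too far (or goes missing).
public class PositionPredictor
{
    private readonly Queue<Point> _history = new Queue<Point>();
    private readonly int _maxHistory;

    public PositionPredictor(int maxHistory = 5) => _maxHistory = maxHistory;

    public void Add(Point p)
    {
        _history.Enqueue(p);
        if (_history.Count > _maxHistory) _history.Dequeue();
    }

    // Last known position plus average per-frame velocity over the history.
    public Point PredictNext()
    {
        var pts = _history.ToArray();
        if (pts.Length < 2) return pts.Length == 1 ? pts[0] : Point.Empty;

        double vx = (double)(pts[pts.Length - 1].X - pts[0].X) / (pts.Length - 1);
        double vy = (double)(pts[pts.Length - 1].Y - pts[0].Y) / (pts.Length - 1);
        var last = pts[pts.Length - 1];
        return new Point((int)(last.X + vx), (int)(last.Y + vy));
    }

    // Accept the detection if it's near the prediction; otherwise use the prediction.
    public Point Filter(Point? detected, double maxJumpPixels = 30)
    {
        var predicted = PredictNext();
        if (detected.HasValue &&
            System.Math.Abs(detected.Value.X - predicted.X) <= maxJumpPixels &&
            System.Math.Abs(detected.Value.Y - predicted.Y) <= maxJumpPixels)
        {
            Add(detected.Value);
            return detected.Value;
        }
        Add(predicted);                     // "fake it" for this frame
        return predicted;
    }
}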

Oh yeah, did I mention I had to relearn the whole delegate-message-passing-in-a-threadsafe-way two-step? Good news is it took considerably less time this time. About an hour vs. several days. That helped put things into perspective.
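
For reference, the two-step is the standard WinForms InvokeRequired/Invoke dance. Something like this (a minimal sketch; the form and label names are made up):

using System;
using System.Drawing;
using System.Windows.Forms;

// Marshalling a result from the camera/tracking thread back to the UI thread:
// check InvokeRequired and re-invoke the same method on the UI thread.
public partial class TrackerForm : Form
{
    private Label lblTargetPosition = new Label();

    // Called from the video-processing thread whenever a new target point is found.
    public void ReportTargetPosition(Point target)
    {
        if (InvokeRequired)
        {
            // Not on the UI thread: hop over to it and try again.
            Invoke(new Action<Point>(ReportTargetPosition), target);
            return;
        }
        lblTargetPosition.Text = $"Target: {target.X}, {target.Y}";
    }
}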

Overall I'm very pleased at how it's coming along. Good, solid accomplishment-reinforcement cycle. Keeps me wanting to push, learning more.

So I wonder how the dog would react to being an "object"...?

Monday, March 12, 2007

Cube Area Missile Defense: Prototype 1

Here Caleb demonstrates the first prototype of the "base unit" + fire control.

I apologize for the crappy video + wonky editing + low res. It was a spur-of-the-moment capture with the family digital (still) camera's movie mode...



It's being controlled via a custom Windows Forms app written in C# and using the most excellent MindSqualls .NET API for NXT.

$1 laser pointer grafted on from a local dollar store.
Missile is a Technic Competition Arrow and launcher. (I bought a bunch from BrickLink a while back.)

The control at this point is via bluetooth, so it's wireless.

Next steps:
  1. add a webcam control to the windows form app
  2. refactor the physical manifestation (ie the robot) for less gear lash.
  3. abstract out control functions for movement so I can plug in (arbitrarily) programmatic, keyboard, joystick, "forms buttons", or mouse control.
  4. calibration routines to map the camera's field of view to the arm's range of motion
  5. code to move the "aiming point" to a designated spot.
  6. plug in motion detection
  7. point at the center of mass of a detected movement
  8. experiment with the ultrasonic sensor to see how accurately it'll detect distance...maybe figure out some simple ballistics. (alternately, only fire at an object if it's within a given distance.)
  9. refactor the base with multi-shot capabilities.

Thursday, January 04, 2007

Christmas slowdowns..

...aren't just for UPS!

Atch. We decided to call it quits until the new year (which this, now, is). As my wise partner-in-crime put it:

"We had agreed at the start that this shouldn't devolve into yet another obligation. The intention was as an opportunity for growth, development - and a measure of diversion."

Yup. And we're both buried. So it's on hold for now.

That doesn't mean the idea factory has been idle. I've been musing for a while how to get video to the PC for processing. This looks like just the ticket.