According to Joel, one should never toss it all out to start from scratch.
Weeeeelllll....I am. But not on the software side. On the hardware side I tore apart the whole thing, deciding I needed a complete redesign.
(All things considered, it does have fewer moving parts than Mozilla.)
After getting a very ugly Windows app up and running, I realized (belatedly) that it was time to refactor. And those unit tests I'd been meaning to write?
(laughs nervously.)
Weeellll...
You get the picture. I've got a morass of self-written stuff, some other folks' code, and some hacked-up versions of both, all in the big happy Windows Forms pot. It's time to come clean. Cleave truth from fiction. Air out the dirty socks and all that. "Do It Right".
Well, at least "right-er". Before I've got so much spaghetti I'm an honorary Soprano. Oh, and get rid of those nasty ArrayLists in favor of generics. (After casting my objects for the n^40th time, I realized why everyone was so excited...) Because leaving them in...you know...casts a bad light on the family.
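For anyone who hasn't felt that particular pain, here's roughly the before-and-after. (TrackedTarget here is a made-up stand-in for whatever's actually floating in the pot.)

```csharp
using System.Collections;
using System.Collections.Generic;

// Hypothetical stand-in for one of the tracking objects.
class TrackedTarget { public int X, Y; }

class CastingDemo
{
    static void Main()
    {
        // The old way: ArrayList hands back plain objects, so every
        // read needs a cast (and a prayer that the type is right).
        ArrayList oldList = new ArrayList();
        oldList.Add(new TrackedTarget());
        TrackedTarget t = (TrackedTarget)oldList[0]; // cast #n^40

        // The new way: List<T> is strongly typed. No casts, and the
        // compiler complains at build time instead of the runtime
        // throwing an InvalidCastException mid-demo.
        List<TrackedTarget> newList = new List<TrackedTarget>();
        newList.Add(new TrackedTarget());
        TrackedTarget t2 = newList[0];
    }
}
```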
First step: New app in VS2005. Copy over the modules I developed and the ones I modified. Add everything to source control. (Done.)
Second step: Write those unit tests. (In progress.) So far the "target tracking" object is officially unit-ed.
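For flavor, here's the shape of one of those tests, NUnit-style. The TargetTracker API shown (Update, CurrentTarget) is invented for illustration; the real object's interface differs, and the stub below exists only so the sketch compiles.

```csharp
using System.Drawing;
using NUnit.Framework;

// Minimal stand-in so the sketch compiles; the real tracker
// presumably smooths and predicts rather than parroting raw input.
public class TargetTracker
{
    public Point CurrentTarget;
    public void Update(int x, int y) { CurrentTarget = new Point(x, y); }
}

[TestFixture]
public class TargetTrackerTests
{
    [Test]
    public void UpdateMovesCurrentTargetToNewReading()
    {
        TargetTracker tracker = new TargetTracker();
        tracker.Update(100, 50);

        Assert.AreEqual(100, tracker.CurrentTarget.X);
        Assert.AreEqual(50, tracker.CurrentTarget.Y);
    }
}
```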
Third step: Separate the logic out of Andrew Kirillov's motion recognition code and try to make it work with this super noofty-cool .NET makes-it-easy DirectShow wrapper doohicky.
Why? Because I still can't figure out (and he couldn't either) how to change the resolution for the incoming video stream easily...and this library makes it a breeze.
(I feel absolutely no compunction about not digging into the nasty-icky-commie (heh) directshow/c++ internals. No thank you.)
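To give a sense of why the wrapper wins: with raw DirectShow you're off rebuilding filter graphs and negotiating with IAMStreamConfig in C++ land, while a friendly .NET wrapper boils it down to something like the interface below. This is the general shape, not the actual library's API; all the names here are mine.

```csharp
using System.Drawing;

public delegate void FrameHandler(Bitmap frame);

// Hypothetical sketch of what such a wrapper exposes; the real
// library's names will differ. The point: frame size becomes a
// property you set before Start(), instead of a trip through
// IAMStreamConfig and friends.
public interface ICaptureDevice
{
    Size FrameSize { get; set; } // <-- the "breeze" part
    event FrameHandler NewFrame; // fires once per decoded frame
    void Start();
    void Stop();
}
```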
What else...oh yes. I need to add some new logic to the robot's targeting guts to see how pairing dead reckoning with the laser targeting works.
The laser's really cool...but it isn't 100% accurate. I'd say it's about 60-70% accurate depending on the quality of input (my old Intel camera is pretty noisy), the light level, and the environment. A nice dark-ish neutral wall with the lights off and we've got about 95% accurate tracking (or more).
I've got these 3 cool little lasers I came across (pretty similar to the $1 pointer I found). I actually had a wild thought of doing the "Predator 3-dot" thing as the targeting mechanism.
Think: find the brightest point, then look around for two more. It'd help cut down on spurious input...hmm...future feature, maybe. And, well, not everyone will want to duct-tape 3 laser pointers together.
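Something like this sketch, say. All the thresholds are guesses to tune against real (noisy) footage, GetPixel is famously slow (real code would lock the bitmap bits), and the dot-separation logic is crude, but it shows the shape of the idea:

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;

// Sketch of the "Predator 3-dot" idea: find the brightest pixel in
// the frame, then demand two more bright spots nearby before
// calling it a hit. Constants are assumptions, not measurements.
static class TriDotFinder
{
    const int BrightThreshold = 240; // assumed "laser-bright" luminance
    const int SearchRadius    = 40;  // assumed max pixel spacing of dots
    const int MinSeparation   = 6;   // keep found dots from overlapping

    public static bool FindTriDot(Bitmap frame, out Point center)
    {
        // Pass 1: the single brightest pixel in the frame.
        Point brightest = Point.Empty;
        int best = -1;
        for (int y = 0; y < frame.Height; y++)
            for (int x = 0; x < frame.Width; x++)
            {
                Color c = frame.GetPixel(x, y);
                int lum = (c.R + c.G + c.B) / 3;
                if (lum > best) { best = lum; brightest = new Point(x, y); }
            }

        // Pass 2: hunt the neighborhood for two more bright spots
        // that aren't just the first dot's own pixels. A lone bright
        // blob (reflection, lamp) fails this test; three dots in a
        // tight cluster almost certainly means our pointers.
        List<Point> dots = new List<Point>();
        dots.Add(brightest);
        for (int y = Math.Max(0, brightest.Y - SearchRadius);
             y < Math.Min(frame.Height, brightest.Y + SearchRadius); y++)
            for (int x = Math.Max(0, brightest.X - SearchRadius);
                 x < Math.Min(frame.Width, brightest.X + SearchRadius); x++)
            {
                Color c = frame.GetPixel(x, y);
                if ((c.R + c.G + c.B) / 3 < BrightThreshold) continue;
                bool tooClose = false;
                foreach (Point d in dots)
                    if (Math.Abs(d.X - x) + Math.Abs(d.Y - y) < MinSeparation)
                        tooClose = true;
                if (!tooClose) dots.Add(new Point(x, y));
            }

        if (dots.Count >= 3)
        {
            // Centroid of the three dots = the aim point.
            center = new Point((dots[0].X + dots[1].X + dots[2].X) / 3,
                               (dots[0].Y + dots[1].Y + dots[2].Y) / 3);
            return true;
        }
        center = Point.Empty;
        return false;
    }
}
```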
So we'll try the dead reckoning + laser pointer approach. If nothing else, the laser pointer will help calibrate the dead reckoning (i.e., point it at the four corners of the camera's viewable area and store the rotation values of the NXT motors controlling pan and tilt).
Hmm...come to think of it, if I have those values stored, I can also avoid the dreaded "we're pointing at the ceiling!" syndrome, where the bot would start chasing butterflies (i.e., flutteringly bad input) past the camera's boundaries.
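Here's a sketch of how those stored corner values could drive both the aiming and the butterfly-proofing. I'm assuming pan and tilt are independent enough that linear interpolation between the corner readings is a reasonable starting point; real optics will bend that a bit, and all the field names are mine.

```csharp
using System;
using System.Drawing;

// Corner-calibration sketch: aim the laser at each edge of the
// camera's view, record the pan/tilt motor rotations (NXT motors
// report degrees), then interpolate for any pixel and clamp so the
// bot can never chase bad input off-screen.
class PanTiltCalibration
{
    // Motor rotations (degrees) recorded at the calibrated edges.
    public int PanLeft, PanRight, TiltTop, TiltBottom;
    public Size FrameSize;

    public void AimAt(Point target, out int pan, out int tilt)
    {
        // Clamp first: anything past the frame edges is noise, so
        // pin it to the calibrated boundary instead of following it
        // up to the ceiling.
        int x = Math.Max(0, Math.Min(FrameSize.Width - 1, target.X));
        int y = Math.Max(0, Math.Min(FrameSize.Height - 1, target.Y));

        // Linear interpolation between the corner readings.
        pan  = PanLeft + (PanRight - PanLeft) * x / (FrameSize.Width - 1);
        tilt = TiltTop + (TiltBottom - TiltTop) * y / (FrameSize.Height - 1);
    }
}
```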