Friday, July 22, 2005

Heuristics

I've been wondering about how to explain and define heuristics lately. I really like this from an interview with programmer Wil Shipley on drunkenblog:


Shipley: One of the rules of writing algorithms that I've recently been sort of toying with is that we (as programmers) spend too much time trying to find provably correct solutions, when what we need to do is write really fast heuristics that fail incredibly gracefully.


This is almost always how nature works. You don't have to have every cell in your eye working perfectly to be able to see. We can put together images with an incredible amount of damage to the mechanism, because it fails so gracefully and organically.


This is, I am convinced, the next generation of programming, and it's something we're already starting to see: for instance, vision algorithms today are modeled much more closely after the workings of the eye, and are much more successful than they were twenty years ago.



Interviewer: Wait wait wait, can you elaborate on this heuristics bit being the next big thing, because you just bent some people's brains. When I normally think of heuristics in computer science, I think of either "an educated guess" or "good enough".


I.E., a game programmer doesn't have to run out Pi to the Nth degree to calculate the slope of a hill in a physics engine, because they can get something 'good enough' for the screen using a rougher calculation... but hasn't it always been like that out of necessity?



Shipley: Heuristics (the way I'm using them) are basically algorithms that are not guaranteed to get the right answer all the time. Sometimes you can have a heuristic that gets you something close to the answer, and you (as the programmer) say, "This is close enough for government work."


This is a very old trick of programming, and it's a very powerful one on its own. Trying to make algorithms that never fail, and proving that they can never fail, is an entire branch of computer science and frankly one that I think is a dead end. Because that's not the way the world works.
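

[An aside from me, not from the interview: the interviewer's physics-engine example is easy to make concrete. The little Python sketch below uses a coarse sine lookup table (the table size and function name are mine, purely illustrative); it is off by a percent or so, which is exactly the kind of "close enough for the screen" shortcut being described.]

    import math

    TABLE_SIZE = 256   # arbitrary; bigger table, smaller error
    SIN_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

    def fast_sin(angle):
        # Snap down to a precomputed entry instead of computing sin exactly.
        index = int(angle / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE
        return SIN_TABLE[index]

    # Off by a percent or so, and nobody drawing a hill on screen will notice.
    print(fast_sin(1.0), math.sin(1.0))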


When you look at biological systems, they are usually not perfect machines; they have all these heuristics to deal with a variety of situations (hey, our core temperature is too hot, let's release sweat, which should cool us off), but none of them are anywhere near provably correct in all circumstances (hey, we're actually submerged in hot water, so sweat isn't effective in cooling us off). But they're good enough, and they fail gracefully.


You don't die immediately if sweating fails to cool you; you just grow uncomfortable and have to make a conscious response (hey, I think I'll get out of this hot tub now).


Programs need to be written this way. In the case of reading bar codes, you don't care if you read garbage a thousand times a second. It doesn't hurt you. If you write an algorithm that looks for barcodes everywhere in the image, even in the sky or in a face or a cup of coffee, it's not going to hurt anything. Eventually the user will hold up a valid barcode, it'll read it, the checksum will verify, and you're in business.


And the barcode recognizer doesn't have to understand every conceivable way a barcode can be screwed up. If the lighting is totally wrong, or the barcode is moving, the user has to take conscious action and, like, tilt the book differently or hold it still. But this kind of feedback is immediately evident, and it's totally natural.


Because I can try 1,000 times a second, I can give immediate feedback on whether I have a good enough image or not, so the user doesn't, like, take a picture, hold her breath for four seconds, have the software go "WRONG," try adjusting the book, take another picture, hold her breath...
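

[Again an aside of mine, not Shipley's code: the barcode loop he describes has roughly this shape in Python. grab_frame, try_decode_barcode, and checksum_ok are hypothetical stand-ins for a camera and decoder API; the point is the structure: try constantly, throw away garbage, and only act on a checksum-valid read.]

    def scan_until_valid(grab_frame, try_decode_barcode, checksum_ok):
        # Try constantly; reading garbage a thousand times a second costs nothing.
        while True:
            frame = grab_frame()
            candidate = try_decode_barcode(frame)   # usually None or noise
            if candidate is None:
                continue                            # fail gracefully, just try again
            if checksum_ok(candidate):
                return candidate                    # a real barcode finally showed up
            # Bad read: no error dialog; the live preview is the feedback,
            # so the user just tilts the book or holds it still.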


Humans are incredibly good at trying new and random things when they get instant feedback. It's the basis of all learning for us, and it's an absolutely fundamental rule of UI design. (This is also the basis of the movement away from having modal dialogs that pop up and say, "Hey, you pressed a bad key!" If you have to pause and read and dismiss the dialog, the lesson you get is, "Stop trying to learn this program," not, "Try a different key."


The Mac and NeXTstep were pioneers in getting this right -- just beep if the user hits a wrong key, so if she wants she can lean on the whole keyboard and see if ANY keys are valid, and there's no punishment phase for it.)
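

[Last aside from me, not from the interview: the two feedback styles side by side, as a Python sketch. beep and show_modal_error are hypothetical UI calls; the difference is that one interrupts the user's experimenting and the other doesn't.]

    VALID_KEYS = {"a", "s", "d", "f"}   # whatever keys the program actually accepts

    def handle_key_modal(key, show_modal_error):
        # Punishing style: a blocking dialog the user must stop and dismiss.
        if key not in VALID_KEYS:
            show_modal_error("Hey, you pressed a bad key!")

    def handle_key_beep(key, beep):
        # Graceful style: an instant beep, so leaning on the whole keyboard
        # to see which keys work costs the user nothing.
        if key not in VALID_KEYS:
            beep()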


...



Read the complete interview



If it isn't clear what all this has to do with pragmatics, wait for my PhD thesis...

1 comment:

mikel said...

When I use a computer and I mess up by losing a file or what I have written, I feel that I have just dropped off the edge of a cliff and there is no way to get back up even if I wanted to. A lot of the time I did not have the opportunity to save the file, because I believe it would be an incompetent computer programmer who would not have saved my file automatically. This pertains to thinking because I believe that when it is saved automatically we are using computers that are secure, and that the makers would want to make them as secure as possible. It is discrimination because of the way I think. If I write something good, why the hell do I have to rely on someone else who thinks differently to make sure it does not get erased?