Sunday, October 30, 2005

News, mothballs

Regular readers - if there are any - will have been disappointed at the lack of updates and annoyed by the proliferation of spam comments. I have tightened things up so that spurious comments are harder to post and I'll gradually remove the ones that are already here.

As for new posts, I can't promise. I'm trying to get my PhD thesis written, so updates will be infrequent at best. If you use a feed reader, please subscribe to my RSS or Atom feed so you'll see when I do manage a new post without having to check back at the website.

Friday, July 22, 2005


I've been wondering about how to explain and define heuristics lately. I really like this from an interview with programmer Wil Shipley on drunkenblog:

Shipley: One of the rules of writing algorithms that I've recently been sort of toying with is that we (as programmers) spend too much time trying to find provably correct solutions, when what we need to do is write really fast heuristics that fail incredibly gracefully.

This is almost always how nature works. You don't have to have every cell in your eye working perfectly to be able to see. We can put together images with an incredible amount of damage to the mechanism, because it fails so gracefully and organically.

This is, I am convinced, the next generation of programming, and it's something we're already starting to see: for instance, vision algorithms today are modeled much more closely after the workings of the eye, and are much more successful than they were twenty years ago.

Interviewer: Wait wait wait, can you elaborate on this heuristics bit being the next big thing, because you just bent some people's brains. When I normally think of heuristics in computer science, I think of either "an educated guess" or "good enough".

I.e., a game programmer doesn't have to run out Pi to the Nth degree to calculate the slope of a hill in a physics engine, because they can get something 'good enough' for the screen using a rougher calculation... but hasn't it always been like that out of necessity?

Shipley: Heuristics (the way I'm using them) are basically algorithms that are not guaranteed to get the right answer all the time. Sometimes you can have a heuristic that gets you something close to the answer, and you (as the programmer) say, "This is close enough for government work."

This is a very old trick of programming, and it's a very powerful one on its own. Trying to make algorithms that never fail, and proving that they can never fail, is an entire branch of computer science and frankly one that I think is a dead end. Because that's not the way the world works.

When you look at biological systems, they are usually not perfect machines; they have all these heuristics to deal with a variety of situations (hey, our core temperature is too hot, let's release sweat, which should cool us off) but none of them are anywhere near provably correct in all circumstances (hey, we're actually submerged in hot water, so sweat isn't effective in cooling us off). But they're good enough, and they fail gracefully.

You don't die immediately if sweating fails to cool you; you just grow uncomfortable and have to make a conscious response (hey, I think I'll get out of this hot tub now).

Programs need to be written this way. In the case of reading bar codes, you don't care if you read garbage a thousand times a second. It doesn't hurt you. If you write an algorithm that looks for barcodes everywhere in the image, even in the sky or in a face or a cup of coffee, it's not going to hurt anything. Eventually the user will hold up a valid barcode, it'll read it, the checksum will verify, and you're in business.

And the barcode recognizer doesn't have to understand every conceivable way a barcode can be screwed up. If the lighting is totally wrong, or the barcode is moving, the user has to take conscious action and, like, tilt the book differently or hold it still. But this kind of feedback is immediately evident, and it's totally natural.

Because I can try 1,000 times a second, I can give immediate feedback on whether I have a good enough image or not, so the user doesn't, like, take a picture, hold her breath for four seconds, have the software go "WRONG," try adjusting the book, take another picture, hold her breath...
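The scanning loop Shipley describes can be sketched in a few lines. This is a toy illustration, not his actual code: `try_decode`, the mod-10 checksum, and the digit-list "frames" are all stand-ins for a real camera pipeline and a real barcode checksum such as EAN-13's. The point it shows is the one in the text: garbage reads cost nothing, because only a decode that passes the checksum counts.

```python
def checksum_ok(digits):
    """Toy mod-10 check: accept only if the digit sum is divisible by 10.
    (Stand-in for a real barcode checksum such as EAN-13's.)"""
    return sum(digits) % 10 == 0

def try_decode(frame):
    """Hypothetical decoder: returns a candidate digit list or None.
    A real system would search the camera frame for bar patterns here."""
    return frame  # in this toy, a 'frame' is already a candidate read

def scan(frames):
    """Try every frame; a garbage read is harmless -- just move on."""
    for frame in frames:
        digits = try_decode(frame)
        if digits is not None and checksum_ok(digits):
            return digits  # first read that passes the checksum wins
    return None

# Mostly noise, with one valid code mixed in at the end.
frames = [[1, 2, 3], None, [7, 9, 5], [2, 3, 5]]  # 2+3+5 = 10 -> valid
print(scan(frames))  # -> [2, 3, 5]
```

Because the loop never punishes a failed read, it can run many times per second, which is exactly what makes the feedback to the user feel instant.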

Humans are incredibly good at trying new and random things when they get instant feedback. It's the basis of all learning for us, and it's an absolutely fundamental rule of UI design. (This is also the basis of the movement away from having modal dialogs that pop up and say, "Hey, you pressed a bad key!" If you have to pause and read and dismiss the dialog, the lesson you get is, "Stop trying to learn this program," not, "Try a different key."

The Mac and NeXTstep were pioneers in getting this right -- just beep if the user hits a wrong key, so if she wants she can lean on the whole keyboard and see if ANY keys are valid, and there's no punishment phase for it.)


Read the complete interview

If it isn't clear what all this has to do with pragmatics, wait for my PhD thesis...

Wednesday, May 11, 2005

definition of grice

In the post at logicandlanguage about Harman (see previous post here), I found a link to the Philosophical Lexicon, which defines grice thus:

grice, n. Conceptual intricacy.
"His examination of Hume is distinguished by
erudition and grice." Hence, griceful, adj.
and griceless, adj. "An obvious and griceless
polemic." pl. grouse: A multiplicity of
grice, fragmenting into great details, often in reply to
an original grice note.

Harman, inference and implication

Apologies for the long pause. Normal service -- whatever that might be -- is hereby resumed.

The third term is here. No teaching, so I should be dealing with a huge pile of marking and working on my PhD.

Does reading blog posts about the difference between inference and implication count?

Gillian at logicandlanguage comments on a point that Gilbert Harman makes in the first chapter of Change in View -- and which has been in the back of my mind all through working on my PhD:

When I was a graduate student at Princeton (many days ago), we used to joke that Gilbert Harman had only three kinds of question for visiting speakers:

  • Aren't you ignoring < insert recent result in psychology >?

  • Aren't you assuming that there is an analytic/synthetic distinction?

  • So you say, < insert one of the speaker's claims >, but isn't that just conflating inference and implication?

...The following claims are ubiquitous and false:

  • Logic is the study of the principles of reasoning.

  • Logic tells you what you should infer from what you already believe.

Each overstates the responsibilities of logic, which is the study of what follows from what - implication relations between interpreted sentences; one can know the implication relations between sentences without knowing how to update one's beliefs.

Suppose, for example, that S believes the content of the sentences A and B, and comes to realise that they logically imply C. Does it follow that she should believe the content of C? No. Here are two counterexamples:

1. Suppose C is a contradiction. Then she should not accept it. What should she do instead? Perhaps give up belief in one of the premises, but which one? Logic does not answer the question - as we know from prolonged study of paradoxes - because logic only speaks of implication relations, not about belief revision.

2. Suppose she already believes not-C. Then she might make her beliefs consistent by giving up one of the premises, or by giving up not-C. Or she might suspend belief in all of the propositions and resolve to investigate the matter further at a later date.

Hence these questions about inference and belief revision - about what she should believe given i) what she already believes and ii) facts about implication - go beyond what logic will decide. That's not to say that logic is never relevant to reasoning or belief revision, but it isn't the science of reasoning and belief revision. It's the science of implication relations.
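Gillian's second counterexample can be made concrete with a toy script (my illustration, not hers): take beliefs {A, B, not-C} together with the implication fact that A and B jointly entail C, and enumerate the maximal consistent subsets. Logic identifies the clash but leaves three equally "logical" repairs, which is exactly the point that implication underdetermines belief revision.

```python
from itertools import combinations

# Beliefs as labels; the one implication fact: A and B together entail C.
beliefs = {"A", "B", "not-C"}

def consistent(bs):
    """Inconsistent exactly when A, B and not-C are all held (since A, B |= C)."""
    return not {"A", "B", "not-C"} <= set(bs)

def repair_options(bs):
    """All maximal consistent subsets -- logic alone cannot pick among them."""
    options = []
    for size in range(len(bs), -1, -1):  # largest subsets first
        for subset in combinations(sorted(bs), size):
            if consistent(subset) and not any(set(subset) < o for o in options):
                options.append(set(subset))
    return options

for option in repair_options(beliefs):
    print(option)
# prints three options: drop not-C, drop B, or drop A
```

That the function returns three candidates rather than one is the whole moral: which premise to abandon is a question of belief revision, not of implication.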

Convinced? Gil has a short and very clear discussion of this, and the pernicious consequences of ignoring it, in the second section of his new paper (co-authored with Sanjeev Kulkarni) for the Rutgers Epistemology conference.