In my last posting, I talked briefly about the desire to represent knowledge digitally. From the engineering perspective, this derives essentially from the following argument. Intelligence requires knowledge, so Artificial Intelligence (AI) will require us to encode knowledge in some form that computers can use. Since computers represent everything as digital data, computer knowledge will necessarily be some kind of digital knowledge.
But humans have been trying to think digitally since long before there were computers. There are many reasons why this seems like a useful thing to do, but one of the strongest has to do with human language and people's need for precise communication. Attempts to characterize or describe the world usually start with the observation that there are many kinds of distinct objects: independent bundles of atoms that are easy to separate from other such bundles and to move as a unit from place to place (e.g., rocks, branches of wood, and just about anything mankind has ever manufactured). Words are used to refer to such objects in spoken or written communication, and both the words and the objects they refer to are discrete things. Furthermore, since the only way people can compare how they think is through language, it seems reasonable to imagine that thought and language are two sides of the same coin, so to speak.

While words may be discrete things, human languages are actually not very good at capturing thought digitally, because the mappings from words (labels) to their referents (objects, attributes, qualities, actions, emotions, etc.) are rarely precise and unambiguous. These problems of interpretation and symbol-to-object mapping pose serious challenges to any attempt to represent and use knowledge in digital, language-like ways. Unfortunately, symbolic computing within AI is the attempt to do just this. So far, it hasn't worked out very well.
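To make the mapping problem concrete, here is a minimal Python sketch. The words and their candidate senses are invented for illustration; the point is only that a bare symbol, looked up digitally, does not determine its referent:

```python
# Hypothetical illustration of the symbol-to-referent problem.
# The words and senses below are invented for this sketch, not
# drawn from any real lexicon.

SENSES = {
    "bank": ["financial institution", "edge of a river", "tilt of an aircraft"],
    "bark": ["outer covering of a tree", "sound a dog makes"],
}

def interpret(word):
    """Return every referent a bare symbol could stand for."""
    return SENSES.get(word, ["<unknown referent>"])

for word in ("bank", "bark"):
    print(f"{word!r} could mean: {interpret(word)}")

# The symbol alone does not pick out one referent; a reader needs
# context to disambiguate, and context is exactly what a discrete
# lookup table fails to capture.
```

A digital representation built from such symbols inherits this ambiguity unless something else supplies the disambiguating context.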
Let's go back to the basic need for precise communication. If we want symbols to mean only one thing, we need to carefully define the rules for interpreting and manipulating them. And then we need to teach people what those rules are, so that authors and the readers of their works can follow them independently. This combination of symbols and precise rules for their use defines a kind of formal language, a language we call Mathematics. It just so happens that numbers, with their quantitative operations and relationships, are the easiest things to define unambiguously. In every branch of math, even those where the role of numbers is not very visible, mathematical knowledge is certain. Theorems are proven by starting with axioms or accepted truths and then taking inferential steps using deductive logic. Deduction is a kind of logical inference built on the idea of necessity: if the premises are true and the deductive inference rule is valid, the conclusion is necessarily true (from "all men are mortal" and "Socrates is a man," it follows without exception that "Socrates is mortal"). There are no exceptions - true is universally and completely true, and deductive logic produces new such truths through a process called entailment. The idea of thinking this precisely goes back at least to those ancient Greek philosophers I mentioned in my first blog posting.
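The mechanical core of deduction fits in a few lines of code. Below is a minimal Python sketch of entailment by repeated modus ponens: the propositions P, Q, R and the rules connecting them are invented for illustration, and this is nothing like a full theorem prover, just the "necessity" loop described above:

```python
# Minimal sketch of deductive entailment via repeated modus ponens.
# Propositions and rules are invented for illustration.

facts = {"P"}                      # accepted truths (axioms)
rules = [("P", "Q"), ("Q", "R")]   # each pair means: premise -> conclusion

changed = True
while changed:                     # apply rules until nothing new follows
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)  # modus ponens: P and P->Q entail Q
            changed = True

print(sorted(facts))               # ['P', 'Q', 'R'] -- all entailed truths
```

Because each application of modus ponens preserves truth, everything the loop adds is entailed by the axioms; that is the precise sense in which deduction produces new truths with no exceptions.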
While math requires this kind of precise thinking, most human thinking appears not to require it.