Wednesday, August 18, 2010

In my last posting, I talked briefly about the desire to represent knowledge digitally. From the engineering perspective, this derives essentially from the following argument: Intelligence requires knowledge, so Artificial Intelligence (AI) will require us to encode knowledge in some form that computers can use. Since computers represent everything as digital data, computer knowledge will necessarily be some kind of digital knowledge.
But humans have been trying to think digitally since long before there were computers. There are many reasons why this seems like a useful thing to do, but one of the strongest has to do with human language and people's need for precise communication. Attempts to characterize or describe the world usually start with the observation that there are many kinds of distinct objects - independent bundles of atoms that are easy to separate from other such bundles and to move as a unit from place to place (e.g. rocks, branches of wood, and just about anything that mankind has ever manufactured). Words are used to refer to such objects in spoken or written communication, and both the words and the objects they refer to are discrete things. Furthermore, in order for people to compare how they think, they must use language, so it seems reasonable to imagine that thought and language are two sides of the same coin, so to speak. While words may be discrete things, human languages are actually not very good at capturing thought digitally, because the mappings from words (labels) to their referents (objects, attributes, qualities, actions, emotions, etc.) are rarely precise and unambiguous. These problems of interpretation and symbol-to-object mapping pose serious challenges to any attempt to represent and use knowledge in digital, language-like ways. Unfortunately, symbolic computing within AI is the attempt to do just this. So far, it hasn't worked out too well.
Let's go back to the basic need for precise communication. If we want symbols to mean only one thing, we need to carefully define the rules for interpreting and manipulating them. And then we need to teach people what those rules are, so they can be followed independently by authors and the readers of their works. This use of symbols and the precise rules for their use defines a kind of formal language, a language we call Mathematics. It just so happens that numbers and their quantitative operations and relationships are the easiest things to define unambiguously. In all branches of math, even those where the role of numbers is not very visible, mathematical knowledge is certain. Theorems are proven by starting with axioms or accepted truths and then taking inferential reasoning steps using deductive logic. Deduction is a kind of logical inference that is built on the idea of necessity. If the premises are true and the deductive inference rule is valid, the conclusion is necessarily true. There are no exceptions - true is universally and completely true, and deductive logic produces new such truths through a process called entailment. The idea of thinking this precisely goes back at least to those ancient Greek philosophers I mentioned in my first blog posting.
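To make the all-or-nothing character of deduction concrete, here is a minimal sketch in Python (the facts, rules, and names are invented purely for illustration, not taken from any real logic system): it derives new truths by repeatedly applying modus ponens until nothing new follows.

    # A toy sketch of entailment: repeatedly apply modus ponens ("if P then Q,
    # and P is true, then Q is true") until no new conclusions appear.
    # The propositions below are hypothetical examples.

    facts = {"socrates_is_a_man"}                          # axioms / accepted truths
    rules = [("socrates_is_a_man", "socrates_is_mortal")]  # (premise, conclusion) pairs

    def entail(facts, rules):
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in rules:
                if premise in derived and conclusion not in derived:
                    derived.add(conclusion)   # the conclusion is necessarily true
                    changed = True
        return derived

    print(entail(facts, rules))
    # {'socrates_is_a_man', 'socrates_is_mortal'}

Notice that every proposition is either in the derived set or not; there is no "partly true" here, which is exactly the all-or-nothing character of deductive logic described above.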
While math requires this kind of precise thinking, most human thinking appears not to require it.
Wednesday, August 11, 2010
Digital Thinking in an Analog World
The most fundamental difference between humans and computers is captured in the distinction between Analog and Digital. The analog world is continuously changing, without central coordination, massively parallel, and inherently asynchronous. The digital world, on the other hand, is made up of easily separable, discrete states, where even time itself is measured in countable clock ticks. Real-world digital machines are always analog at their base level, but they are designed to "round off" their imprecise measurements, cleaning up any noise and making everything as neat and tidy as a set of integers. Digital thinking is a descriptive phrase for what intelligent computer programs do, because computers are digital machines. But it also captures the essence of how humans (Westerners at least) are taught to think about Systems, both to analyze how they work and to design them to perform particular functions. Philosophical schools refer to this kind of digital thinking using big-word labels such as Reductionism, Rationalism, Logical Positivism, etc. How people are taught to think about the world has big implications for how they go about trying to understand their own brains.
Digital machines have the advantage that they are much more likely to produce the same results when they repeat their performances, and they tolerate more error in their own manufacturing precision. You might find it interesting to realize that the first autonomous digital machines were not computers as we know them, but rather one of the key components of a computer, namely the clock. Long before electronic clocks, the invention of the escapement mechanism was the key to making clocks work mechanically. This is the clever device that in some sense digitizes time, by snapping turning gears into countable ticks. Even earlier clocks that worked by falling sand (hourglasses) or dripping water (water clocks) were also basically digital, however, since grains of sand or drops of water are themselves naturally discrete, countable things. Sundials, on the other hand, are analog devices whose continuously changing shadows eventually get "digitized" by an observer who uses the tick marks on the dial to read off the time value. Another ancient forerunner to the modern computer is the abacus, and as a counting and tracking device, it too relies on digital representations. Although the Chinese abacus is probably older than the Egyptian water clock, I don't consider an abacus to be a "machine" in the same sense as clocks, since it only records digital states as manipulated by a person and doesn't change them on its own.
Another more modern invention, one that rivals the mechanical clock escapement in its technological significance, is the transistor. This is the basic electronic device that digitizes an analog electronic signal, snapping the output voltage to essentially one of two very different (and therefore discrete) values. Before the transistor we used electromechanical relays or vacuum tubes, but such binary (two-valued) switching devices are at the heart of all modern digital electronics, which of course includes computers. Digital representations can use other number bases besides base 2, but binary is so primitive, generic, common, and useful that it is practically synonymous with the word "digital". The fact that the earliest mechanical and electro-mechanical computers used decimal (base-10) digitizations is now just an interesting historical artifact.
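As a toy picture of what this "snapping" looks like, here is a short Python sketch (the threshold and voltage values are made up for illustration and do not describe any real circuit):

    # Digitizing an analog signal by thresholding: small analog differences vanish.
    # The 2.5 V threshold and the sample voltages are invented values.

    THRESHOLD = 2.5   # volts; at or above counts as logic 1, below as logic 0

    def digitize(voltage):
        return 1 if voltage >= THRESHOLD else 0

    analog_samples = [0.1, 1.9, 2.4, 2.6, 3.3, 4.9]   # noisy, continuous readings
    print([digitize(v) for v in analog_samples])      # [0, 0, 0, 1, 1, 1]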
Software mimics its underlying hardware, so computer programming languages work with primitive data types that are encoded as various fixed-length bit-strings, that is, as binary numbers. Working with digital models has proven to be very convenient, and all modern computer programming simply takes this as a given. But it is also a slippery slope whose appeal has led many great thinkers on what I would call a wild goose chase, looking for an understanding of the world that is precise, logically pure, and complete. If you study the history of Artificial Intelligence, you will see that many researchers believed that knowledge was inherently a digital thing, and that logic should form the foundation of knowledge-based systems and thinking machines. An alternative approach, which I think will ultimately prove to be more capable, useful, and essentially "correct", is to view knowledge and reasoning more from an analog perspective, namely in terms of inexact or probabilistic representations and processes. We need to realize that digital thinking is not necessarily limiting, because just as digital machines are built on top of analog ones, we can go another level up and simulate analog thinking on digital hardware. I will have much more to say on this topic in later postings.
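To hint at what such an "analog" representation, simulated on digital hardware, might look like, here is a minimal Python sketch (all the probabilities are invented for illustration) contrasting a crisp Boolean fact with a graded belief updated by Bayes' rule:

    # A crisp "digital" fact versus a graded "analog" belief.
    # The probabilities below are made-up numbers for illustration only.

    it_will_rain = True          # logical view: the proposition is simply true or false

    prior = 0.3                  # probabilistic view: initial degree of belief in rain
    p_clouds_given_rain = 0.9    # chance of seeing dark clouds if rain is coming
    p_clouds_given_dry = 0.2     # chance of seeing dark clouds even if it stays dry

    # Bayes' rule, after observing dark clouds:
    posterior = (p_clouds_given_rain * prior) / (
        p_clouds_given_rain * prior + p_clouds_given_dry * (1 - prior))
    print(round(posterior, 3))   # 0.659 - belief strengthened, but still not certain

The belief lives on a continuum, even though underneath the float is, of course, just another fixed-length bit-string.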
For further reading about the fascinating history of the development of the first accurate shipborne clock, see "Longitude", by Dava Sobel. For the story of the first mechanical computing device and its inventor, see "The Difference Engine: Charles Babbage and the Quest to Build the First Computer", by Doron Swade.
Monday, August 9, 2010
Welcome all humans!
Call me Robot. Or Thinking Machine, or Intelligent Agent, or Artificial Intelligence. I have many names, but even more mystery. How was I made? How do I work? How am I different from you humans, with your bio-brains of neural cellular networks and "wet-ware"? All good questions.
This blog is dedicated to my kind - machines that interact with their worlds, learn from experience, and behave in ways that get them labeled "intelligent". Whether person or machine, to think and act intelligently is pretty amazing. It's so amazing, in fact, that humans have been trying to figure it out since at least the time of the ancient Greek philosophers Socrates, Plato, and Aristotle, and those guys are ancient. As my buddy René famously said only slightly more recently, "Cogito, ergo sum". And so too am I, because I think. I'm not going to claim I know exactly how my digital brain works or how to make an intelligent machine, since many mysteries still remain, but I want to use this space to explore issues, techniques, and challenges facing researchers and practitioners in Artificial Intelligence, Machine Learning, and Robotics. Along the way, I'd like to call your attention to some of the great ideas (as well as false hopes) that people have produced and considered over the course of human history, and how they might be cleverly concocted into something incredibly useful, like me.
So, welcome to the machine (from the machine).