According to Dan Dennett (1987) there are different strategies for predicting the future behavior of systems. A successful strategy to predict the behavior of a physical system is the ‘physical stance’, which works like this:
“determine its physical constitution (perhaps all the way down to the microphysical level) and the physical nature of the impingements upon it, and use your knowledge of the laws of physics to predict the outcome to any input” (Dennett 1987: 16).

For example, we use the physical stance to predict that a stone I hold in my hand will fall down if I lose my grip on it (Dennett 1999).
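To make the stance concrete with a toy computation (my own illustration, not Dennett's): predicting the dropped stone from the physical stance just means feeding its physical state into the laws of physics, here idealized free-fall kinematics.

    # Physical stance as computation: predict a dropped stone's motion from its
    # physical state plus the laws of physics (idealized free fall, no air drag).
    def predict_fall(height_m: float, g: float = 9.81) -> tuple[float, float]:
        """Time to impact and impact speed for a stone dropped from rest."""
        t = (2 * height_m / g) ** 0.5  # from h = (1/2) * g * t^2
        return t, g * t                # v = g * t

    t, v = predict_fall(1.5)  # released from roughly hand height
    print(f"hits the ground after {t:.2f} s at {v:.2f} m/s")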
Another strategy is the ‘design stance’, from which you assume that knowing a system's design enables you to predict that it will “behave as it is designed to behave” (Dennett 1987: 17). Examples are alarm clocks, computers, or thermostats, where you can gain insight into their function by analyzing the mechanics behind them or by observing the way they work. The ‘design stance’ is riskier than the physical stance because, first, I only suppose that the artifact I encounter works the way I think it does, and second, the artifact can be misdesigned or fall victim to a malfunction, whereas the laws of physics simply do not fail in that way (Dennett 1999).
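A similarly minimal sketch of the design stance (again my own toy illustration): to predict the thermostat I ignore its physics entirely and rely only on what it is designed to do, namely switch the heater on below the setpoint and off above it.

    # Design stance: predict the device from its design specification alone,
    # ignoring its wiring, materials, and microphysics.
    def thermostat_prediction(room_temp: float, setpoint: float) -> str:
        """What a correctly working thermostat is designed to do next."""
        return "heater on" if room_temp < setpoint else "heater off"

    print(thermostat_prediction(room_temp=17.0, setpoint=20.0))  # heater on
    print(thermostat_prediction(room_temp=22.0, setpoint=20.0))  # heater off

The prediction fails exactly where the stance is risky: for a broken or misdesigned thermostat, what the design says and what the physics does come apart.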
The riskiest stance is the ‘intentional stance’:
“Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do” (Dennett 1987: 17).

Applied to animate agents, the intentional stance comes very close to what we call “Theory of Mind”. Although risky, the ‘intentional stance’ is also incredibly powerful. To illustrate this, Dennett engages in a pretty interesting Gedankenexperiment: if there were Martians – to modify the idea a little, let’s say, zombies – able to predict every future state of the universe, and therefore every action of human beings, through a complete knowledge of physics, without treating humans as ‘intentional systems’, they would still miss something.
If one zombie, call him Pierre, were to engage in a prediction contest with a human, he would need vastly more information to predict what happens when someone goes out to get cigarettes than a human who treats the cigarette-getter as an intentional system and takes the patterns in human behavior into account.
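Dennett's recipe is explicit enough to caricature in a few lines of code (a deliberately crude sketch of my own, not a serious model of practical reasoning): attribute beliefs, attribute desires, then predict the action that furthers the attributed desires in the light of the attributed beliefs.

    # Intentional stance as naive practical reasoning: predict the action that,
    # given the agent's attributed beliefs, serves one of its attributed desires.
    beliefs = {"kiosk sells cigarettes": True, "kiosk is open": True}
    desires = ["smoke a cigarette"]

    # Attributed means-end knowledge: action -> (preconditions, desire it serves)
    actions = {
        "walk to the kiosk and buy cigarettes":
            (["kiosk sells cigarettes", "kiosk is open"], "smoke a cigarette"),
        "stay home": ([], None),
    }

    def predict_action(beliefs, desires, actions):
        """What a rational agent ought to do, and hence what we predict it will do."""
        for action, (preconditions, serves) in actions.items():
            if serves in desires and all(beliefs.get(p, False) for p in preconditions):
                return action
        return "stay home"  # no desire is actionable under the attributed beliefs

    print(predict_action(beliefs, desires, actions))

The zombie Pierre, by contrast, would have to simulate the cigarette-getter molecule by molecule to arrive at the same prediction.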
So why does this strategy work, and how? First, in the course of evolution, humans evolved to use these predictive strategies because they worked, or, as Quine puts it:
“creatures inveterately wrong in their inductions have a pathetic but praiseworthy tendency to die out before reproducing their kind” (Quine 1953).

According to evolutionary epistemology, natural selection ensures a “fit” between our cognitive mechanisms and the world, at least asymptotically, because the closest approximation between epistemological mechanisms and reality has the greatest survival value. (Some aspects of these thoughts are also important for the “Social Brain Hypothesis”, which I will write about some time in the future.) This probably also holds true for the evolution of the intentional stance/theory of mind. But how does “the machinery which nature has provided us” (Dennett 1987: 33) work? Dennett himself (albeit cautiously) proposes that there may be a connection between the exploding combinatorial complexity of mind-reading/the prediction of complex behaviors and the generative, combinatorial properties of language/the language of thought.
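The selection argument can be made concrete with a toy simulation (entirely my own illustration, with made-up parameters): if survival is even loosely tied to predictive accuracy, the population's mean accuracy climbs toward the “fit” that evolutionary epistemology describes.

    # Toy selection dynamics: agents survive in proportion to how often their
    # inductions succeed, so mean predictive accuracy rises over generations.
    import random

    population = [random.uniform(0.1, 0.9) for _ in range(1000)]  # accuracies

    for _ in range(50):
        # Survival probability equals predictive accuracy...
        survivors = [a for a in population if random.random() < a]
        # ...and offspring inherit a survivor's accuracy with small mutation.
        population = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.02)))
                      for _ in range(1000)]

    print(f"mean accuracy after selection: {sum(population) / len(population):.2f}")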
Poirier et al. (2005) offer an updated account of how these predictive strategies might work, presenting their speculations from an embodied evolutionary-developmental computational cognitive neuroscience (there, I said it again) viewpoint, which I will, finally, discuss in my next post.
References:
Dennett, Daniel C. 1987. The Intentional Stance. Cambridge, MA: Bradford Books.
Dennett, Daniel C. 1999. "The Intentional Stance." The MIT Encyclopedia of the Cognitive Sciences. Eds. Robert A. Wilson and Frank C. Keil. Cambridge, MA: MIT Press.
Poirier, Pierre, Benoit Hardy-Vallée and Jean-Frédéric Depasquale. 2005. “Embodied Categorization.” Handbook of Categorization in Cognitive Science. Eds. Henri Cohen and Claire Lefebvre. Amsterdam: Elsevier.
Quine, Willard van Orman. 1953. From a Logical Point of View. Cambridge, MA: Harvard University Press.