(Or—a peek from the peak)
One of the passages in Daniel Dennett's From Bacteria to Bach and Back in which he comes really, really close to cracking the code of reality—or perhaps closer to blowing your mind to smithermemes... The passage in question is found in the chapter "Consciousness as an Evolved User-Illusion".
Its title is unpromising,
"How do human brains achieve 'global' comprehension using 'local' competences?"
... but the going gets really good with the epigraphs—especially when the epigraphs get glossed and riffed on in the text itself. The trip goes from a bon mot to a reflexive hall of mirrors in the funhouse of your brain, so... brace yourself for a dive into your own mind, and into other minds as well:
Language was given to men so that they could conceal their thoughts.
—Charles-Maurice de Talleyrand
Language, like consciousness, only arises from the need, the necessity, of intercourse with others.
—Karl Marx
Consciousness generally has only been developed under the pressure of the necessity for communication.
—Friedrich Nietzsche
There is no General Leslie Groves to organize and command the termites in a termite colony, and there is no General Leslie Groves to organize and command the even more clueless neurons in a human brain. How can human comprehension be composed of the activities of uncomprehending neurons? In addition to all the free-floating rationales that explain our many structures, habits, and other features, there are the anchored reasons we represent to ourselves and others. These reasons are themselves things for us, denizens of our manifest image alongside the trees and clouds and doors and cups and voices and words and promises that make up our ontology. We can do things with these reasons—challenge, reframe, abandon, endorse, disavow them—and these often covert behaviors would not be in our repertoires if we hadn't downloaded all the apps of language into our necktops. In short, we can think about these reasons, good and bad, and this permits them to influence our overt behaviors in ways unknown in other organisms.
The piping plover's distraction display or broken-wing dance gives the fox a reason to alter its course and approach her, but not by getting it to trust her. She may modulate her thrashing to hold the fox's attention, but the control of this modulation does not require her to have more than a rudimentary "appreciation" of the fox's mental state. The fox, meanwhile, need have no more comprehension of just why it embarks on its quest instead of continuing to reconnoiter the area. We, likewise, can perform many quite adroit and retrospectively justifiable actions with only a vague conception of what we are up to, a conception often swiftly sharpened in hindsight by the self-attribution of reasons. It's this last step that is ours alone.
Our habits of self-justification (self-appreciation, self-exoneration, self-consolation, self-glorification, etc.) are ways of behaving (ways of thinking) that we acquire in the course of filling our heads with culture-borne memes, including, importantly, the habits of self-reproach and self-criticism. Thus we learn to plan ahead, to use the practice of reason-venturing and reason-criticizing to presolve some of life's problems, by talking them over with others and with ourselves. And not just talking them over—imagining them, trying out variations in our minds, and looking for flaws. We are not just Popperian but Gregorian creatures (see chapter 5), using thinking tools to design our own future acts. No other animal does that.
Our ability to do this kind of thinking is not accomplished by any dedicated brain structure not found in other animals. There is no "explainer-nucleus," for instance. Our thinking is enabled by the installation of a virtual machine made of virtual machines made of virtual machines. The goal of delineating and explaining this stack of competences via bottom-up neuroscience alone (without the help of cognitive neuroscience) is as remote as the goal of delineating and explaining the collection of apps on your smartphone by a bottom-up deciphering of its hardware circuit design and the bit-strings in memory without taking a peek at the user interface. The user interface of an app exists in order to make the competence accessible to users—people—who can't know, and don't need to know, the intricate details of how it works. The user-illusions of all the apps stored in our brains exist for the same reason: they make our competences (somewhat) accessible to users—other people—who can't know, and don't need to know, the intricate details. And then we get to use them ourselves, under roughly the same conditions, as guests in our own brains.
There might be some other evolutionary path—genetic, not cultural—to a somewhat similar user-illusion in other animals, but I have not been able to conceive of one in convincing detail, and according to the arguments advanced by the ethologist and roboticist David McFarland (1989), "Communication is the only behavior that requires an organism to self-monitor its own control system." Organisms can very effectively control themselves by a collection of competing but "myopic" task controllers, each activated by a condition (hunger or some other need, sensed opportunity, built-in priority ranking, and so on). When a controller's condition outweighs the conditions of the currently active task controller, it interrupts it and takes charge temporarily. (The "pandemonium model" by Oliver Selfridge [1959] is the ancestor of many later models). Goals are represented only tacitly, in the feedback loops that guide each task controller, but without any global or higher level representation. Evolution will tend to optimize the interrupt dynamics of these modules, and nobody's the wiser. That is, there doesn't have to be anybody home to be wiser!
Communication, McFarland claims, is the behavioral innovation which changes all that. Communication requires a central clearing house of sorts in order to buffer the organism from revealing too much about its current state to competitive organisms. As Dawkins and Krebs (1978) showed, in order to understand the evolution of communication we need to see it as grounded in manipulation rather than as purely cooperative behavior. An organism that has no poker face, that "communicates state" directly to all hearers, is a sitting duck, and will soon be extinct (von Neumann and Morgenstern 1944). What must evolve to prevent this exposure is a private, proprietary communication-control buffer that creates opportunities for guided deception—and, coincidentally, opportunities for self-deception (Trivers 1985)—by creating, for the first time in the evolution of nervous systems, explicit and more globally accessible representations of its current state, representations that are detachable from the tasks they represent, so that deceptive behaviors can be formulated and controlled without interfering with the control of other behaviors.
It is important to realize that by communication, McFarland does not mean specifically linguistic communication (which is ours alone) but strategic communication, which opens the crucial space between one's actual goals and intentions and the goals and intentions one attempts to communicate to an audience. There is no doubt that many species are genetically equipped with relatively simple communication behaviors (Hauser 1996), such as stotting, alarm calls, and territorial markings and defense. Stereotypical deception, such as bluffing in an aggressive encounter, is common, but a more productive and versatile talent for deception requires McFarland's private workspace. For a century and more philosophers have stressed the "privacy" of our inner thoughts, but seldom have they bothered to ask why this is such a good design feature. (An occupational blindness of many philosophers: taking the manifest image as simply given and never asking what it might have been given to us for.)
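The competing task-controller architecture Dennett borrows from McFarland and Selfridge is concrete enough to sketch in code. Here is a minimal, hypothetical toy model (all names are mine, not Dennett's or McFarland's): each "myopic" controller reports an activation strength from the sensed state, and whichever shouts loudest takes charge, interrupting the rest. Note that no component holds a global representation of the animal's goals; they live only tacitly in the per-controller conditions.

```python
# A toy "pandemonium" arbitration scheme: competing myopic task controllers,
# each activated by its own condition, with the strongest one interrupting
# whoever currently has control. No central executive, nobody home.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class TaskController:
    name: str
    # Reads the sensed state and returns this controller's current urgency.
    urgency: Callable[[Dict[str, float]], float]


def select_action(state: Dict[str, float], controllers: List[TaskController]) -> str:
    # The controller whose condition outweighs all the others takes charge.
    winner = max(controllers, key=lambda c: c.urgency(state))
    return winner.name


controllers = [
    TaskController("forage", lambda s: s["hunger"]),
    # Evolution can tune the interrupt dynamics, e.g. a built-in priority
    # weighting that makes fleeing preempt everything else.
    TaskController("flee", lambda s: 2.0 * s["predator_proximity"]),
    TaskController("rest", lambda s: s["fatigue"]),
]

print(select_action({"hunger": 0.6, "predator_proximity": 0.1, "fatigue": 0.3}, controllers))
# -> forage (hunger dominates)
print(select_action({"hunger": 0.6, "predator_proximity": 0.9, "fatigue": 0.3}, controllers))
# -> flee (the threat condition interrupts foraging)
```

The point of the sketch is the absence: there is no module that represents "what the animal wants" explicitly, which is exactly why, on McFarland's argument, strategic communication forces something new into the design.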