Will Duquette e-mails and I respond; my replies are marked 'REPLY.'
Having followed your link to McGinn's review of Kurzweil's book, "How to Create a Mind," it seems to me that there's something McGinn is missing that weakens his critique. Mind you, I agree that Kurzweil is mistaken; but there's a piece of Kurzweil's view of things that McGinn doesn't see (or discounts) that is crucial to understanding him.
I don't pretend to be an expert on Kurzweil; but I've been a software engineer for over two decades, as McGinn has not, and there are some habits of thought common to the computer science community. For example, computer software and hardware are often designed as networks of cooperating subsystems, each of which has its own responsibility, and so we fall naturally into a homunculistic manner of speaking when working out designs. And this is practically useful: it aids communication among designers, even if it is philosophically perilous.
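To give a toy illustration of the idiom (the components and names here are invented, not anyone's actual design): we say without a second thought that one part 'notices' a change and 'tells' another part, which 'remembers' it, when all that is literally happening is comparisons, function calls, and list appends.

```python
class Logger:
    """'Remembers' events: in fact it just appends strings to a list."""
    def __init__(self) -> None:
        self.events: list[str] = []

    def record(self, message: str) -> None:
        self.events.append(message)


class Dispatcher:
    """'Tells' the logger: in fact it just calls a method."""
    def __init__(self, logger: Logger) -> None:
        self.logger = logger

    def report(self, message: str) -> None:
        self.logger.record(message)


class Watcher:
    """'Notices' changes: in fact it just compares two numbers."""
    def __init__(self, dispatcher: Dispatcher) -> None:
        self.dispatcher = dispatcher

    def check(self, old: float, new: float) -> None:
        if abs(new - old) > 0.5:
            self.dispatcher.report(f"value changed from {old} to {new}")
```

Nothing in this little network has any mentality; the agent-talk is all in the names we choose.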
Anyway, here's the point that I would make back to McGinn if I were Kurzweil: patterns outside the brain lead to patterns inside the brain. A digital camera sees a scene in the world through a lens and uses hardware and software to turn it into a pattern of bits. Other programs can then operate on that pattern of bits, performing (for example) pattern recognition; still others can turn the bits back into something visible (e.g., a web browser rendering an image).
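Here is a toy sketch of that pipeline (the 'scene' and the routines are invented for illustration): a grid of bits stands in for the camera's output, one routine operates on the pattern, and another turns the bits back into something visible.

```python
# 1. The 'camera': light intensities sampled into a grid of bits
#    (1 = bright pixel, 0 = dark pixel).
scene_bits = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
]

# 2. 'Pattern recognition': a program operating on the bits. It checks
#    whether the grid is left-right symmetric; bits are compared to bits.
def is_symmetric(bits) -> bool:
    return all(row == row[::-1] for row in bits)

# 3. 'Display': turn the bits back into something a person can look at,
#    as a browser renders image data.
def render(bits) -> str:
    return "\n".join("".join("#" if b else "." for b in row) for row in bits)

print(render(scene_bits))
print("symmetric:", is_symmetric(scene_bits))
```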
REPLY: McGinn needn't disagree with any of this, though he would bid you be very careful about 'see' and 'recognition.' A digital camera does not literally see anything, any more than my eyeglasses literally see things. Light bouncing off external objects causes certain changes in the camera which are then encoded in a pattern of binary digits. (I take it that your 'bit' is short for 'binary digit.') And because the camera does not literally see anything, it cannot literally remember what it has (figuratively) 'seen.' The same goes for pattern recognition. Speaking literally, there is no recognition taking place. All that is going on is a mechanical simulation of recognition.
To the extent, then, that sensory images are encoded and stored as data in the brain, the notion that memories (even remembering to buy cat food) might be regarded as patterns and processed by the brain as patterns is quite reasonable.
REPLY: This is precisely what I deny. Memories are intentional experiences: they are of or about something; they are object-directed; they have content. One cannot just remember; in every case to remember is to remember something, e.g., that I must buy cat food. No physical state, and thus no brain state, is object-directed or content-laden. Therefore, memories are not identical to states of the brain such as patterns of neuron firings. Correlated with them, perhaps, but not identical to them.
Of course, as you've noted fairly often recently, a pattern of marks on a piece of paper has no meaning by itself, and a pattern of marks, however encoded in the brain, doesn't either. But Kurzweil, like most people these days, seems to have no notion of the distinction between the Sense and the Intellect; he thinks that only the Sense exists, and he, like Thomas Aquinas, puts memories and similar purely internal phenomena in the Sense. I don't think that's unreasonable. The problem is that he doesn't understand that the Intellect is different.
In short, Kurzweil is certainly too optimistic, but he might have a handle on the part of the problem that computers can actually do. He won't be able to program up a thinking mind; but he might manage a decent lower animal of sorts.
REPLY: Again, I must disagree. You want to distinguish between sensing and thinking, and say that while there cannot be mechanical thinkers, there can be mechanical sensors, using 'thinking' and 'sensing' literally. I deny it. Talk of mechanical sensors is figurative only. I have a device under my kitchen sink that 'detects' water leaks. Two points. First, it does not literally sense anything. There is no mentality involved at all. It is a purely mechanical system. When water contacts one part of it, another part of it emits a beeping sound. That is just natural causation below the level of mind. I sense using it as an instrument, just as I see using my glasses as an instrument. I sense — I come to acquire sensory knowledge — that there is water where there ought not be using this contraption as an instrumental extension of my tactile and visual senses. Suppose I hired a little man to live under my sink to report leaks. That dude, if he did his job, would literally sense leaks. But the mechanical device does not literally sense anything. I interpret the beeping as indicating a leak.
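A toy sketch (the device logic is invented for illustration) makes the point vivid. The detector's entire operation is one causal rule mapping an input to an output; 'leak' figures only in the names that we, the interpreters, give the parts.

```python
def detector_step(moisture_reading: float, threshold: float = 0.8) -> bool:
    """Return True ('beep') when the reading crosses the threshold.

    The beep means 'leak' only to the person who interprets it; to the
    device it is just an output caused by an input.
    """
    return moisture_reading > threshold

for reading in (0.1, 0.3, 0.95):
    if detector_step(reading):
        print("BEEP")  # the homeowner, not the device, takes this as 'leak'
```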
The second point is that sensing is intentional: one senses that such-and-such. For example, one senses that water is present. But no mechanical system has states that exhibit original (as opposed to derivative) intentionality. So there can't be a purely mechanical sensor or thinker.
As for homunculus-talk, it is undoubtedly useful for engineering purposes, but one can be easily misled if one takes it literally. McGinn nails it:
Contemporary brain science is thus rife with unwarranted homunculus talk, presented as if it were sober established science. We have discovered that nerve fibers transmit electricity. We have not, in the same way, discovered that they transmit information. We have simply postulated this conclusion by falsely modeling neurons on persons. To put the point a little more formally: states of neurons do not have propositional content in the way states of mind have propositional content. The belief that London is rainy intrinsically and literally contains the propositional content that London is rainy, but no state of neurons contains that content in that way—as opposed to metaphorically or derivatively (this kind of point has been forcibly urged by John Searle for a long time).
And there is theoretical danger in such loose talk, because it fosters the illusion that we understand how the brain can give rise to the mind. One of the central attributes of mind is information (propositional content) and there is a difficult question about how informational states can come to exist in physical organisms. We are deluded if we think we can make progress on this question by attributing informational states to the brain. To be sure, if the brain were to process information, in the full-blooded sense, then it would be apt for producing states like belief; but it is simply not literally true that it processes information. We are accordingly left wondering how electrochemical activity can give rise to genuine informational states like knowledge, memory, and perception. As so often, surreptitious homunculus talk generates an illusion of theoretical understanding.