1) For it to be maintained with truth that AI systems are literally conscious in the way we are, conscious states must be multiply realizable. Consider a cognitive state such as knowing that 7 is a prime number. That state is realizable in the wetware of human brains. The question is whether the same type of state could be realized in the hardware of a computing machine. Keep in mind the type-token distinction. The realization of the state in question (knowing that 7 is prime) is its tokening in brain matter in the one instance, in silicon-based matter in the other. Such cross-substrate tokening is not possible without multiple realizability of one and the same type of mental state.
2) Conscious states (mental states) are multiply realizable only if functionalism is true. This is obvious, is it not?
3) Functionalism is incoherent.
Therefore:
4) AI systems cannot be literally conscious in the way we are.
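In bare propositional form the argument is a double modus tollens, so its validity is not in question; everything turns on the premises. Here is a minimal sketch in Lean, with letters of my own choosing: P for 'AI systems are literally conscious in the way we are,' Q for 'conscious states are multiply realizable,' and R for 'functionalism is true.'

```lean
-- A minimal formalization of the argument's shape (the labels are mine).
theorem no_literal_ai_consciousness (P Q R : Prop)
    (h1 : P → Q)    -- (1) literal AI consciousness requires multiple realizability
    (h2 : Q → R)    -- (2) multiple realizability requires functionalism
    (h3 : ¬R) :     -- (3) functionalism is incoherent, hence false
    ¬P :=           -- (4) AI systems are not literally conscious in the way we are
  fun hP => h3 (h2 (h1 hP))
```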
That's the argument. The premise that needs defending is (3). So let's get to it.
Suppose Socrates Jones is in some such state as that of perceiving a tree. The state is classifiable as a mental state, as opposed to a physical state such as that of his lying beneath a tree. What makes a mental state mental? That is the question.
The functionalist answer is that what makes a mental state mental is just the causal role it plays in mediating between the sensory inputs, behavioral outputs, and other internal states of the subject in question. The idea is not the banality that mental states typically (or even always) have causes and effects, but that it is causal role occupancy, nothing more and nothing less, that constitutes the mentality of a mental state. The intrinsic nature of what plays the role is relevant only to its fitness for instantiating mental causal roles, but not at all relevant to its being a mental state.
Consider a piston in an engine. You can't make a piston out of chewing gum, but being made of steel is no part of what makes a piston a piston. A piston is what it does within the 'economy' of the engine. Similarly, on functionalism, a mental state is what it does. This allows, but does not entail, that a mental state be a brain or CNS state. It also allows, but does not entail, that a mental state be a state of a computing machine.
To illustrate, suppose my cat Zeno and I are startled out of our respective reveries by a loud noise at time t. Given the differences between human and feline brains, presumably man and cat are not in type-identical brain states at t. (One of the motivations for functionalism was the breakdown of the old type-type identity theory of Herbert Feigl, U. T. Place, J. J. C. Smart, et al.) Yet both man and cat are startled: both are in some sense in the same mental state, even though the states they are in are neither token- nor type-identical. The functionalist will hold that we are in functionally the same mental state in virtue of the fact that Zeno's brain state plays the same role in him as my brain state plays in me. It does the same mediatorial job vis-à-vis sensory inputs, other internal states, and behavioral outputs in me as the cat's brain state does in him.
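The picture can be put in a few lines of code. The sketch below is my own toy illustration, not anything a functionalist is committed to in detail: the class name and the 'startle' transition profile are invented, and the point is only that the role is specified without any mention of what realizes it.

```python
# A toy rendering of a functional role: the "startle" state is individuated
# solely by what it does with sensory inputs, other internal states, and
# behavioral outputs. (Illustrative sketch; names are invented.)

class StartleRole:
    """Bare causal role: loud input -> startled internal state -> flinch output."""

    def __init__(self, realizer: str) -> None:
        self.realizer = realizer   # what the role happens to be realized in
        self.startled = False      # internal state mediating input and output

    def receive(self, stimulus: str) -> None:
        if stimulus == "loud noise":    # sensory input
            self.startled = True

    def behave(self) -> str:
        return "flinch" if self.startled else "carry on"   # behavioral output


# Different 'hardware', same role -- and on functionalism the role is all that matters.
for realizer in ("human neurons", "feline neurons", "silicon circuits"):
    subject = StartleRole(realizer)
    subject.receive("loud noise")
    print(realizer, "->", subject.behave())   # each prints 'flinch'
```

Nothing in the specification of the role mentions neurons, and that is exactly the latitude at issue in what follows.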
On functionalism, then, the mentality of the mental is wholly relational. And as David Armstrong points out, "If the essence of the mental is purely relational, purely a matter of what causal role is played, then the logical possibility remains that whatever in fact plays the causal role is not material." This implies that "Mental states might be states of a spiritual substance." Thus the very feature of functionalism that allows mentality to be realized in computers and nonhuman brains generally also allows it to be realized in spiritual substances, if there are any.
Whether this latitudinarianism is thought to be good or bad, functionalism is a monumentally implausible theory of mind. There are the technical objections that have spawned a pelagic literature: absent qualia, inverted qualia, the 'Chinese nation,' etc. Thrusting these aside, I go for the throat, Searle-style.
Functionalism is threatened by a fundamental incoherence. The theory states that what makes a state mental is nothing intrinsic to the state, but purely relational: a matter of its causes and effects. In us, these happen to be neural. (I am assuming physicalism for the time being.) Now every mental state is a neural state, but not every neural state is a mental state. So the distinction between mental and nonmental neural states must be accounted for in terms of a distinction between two different sets of causes and effects, those that contribute to mentality and those that do not. But how make this distinction? How do the causes/effects of mental neural events differ from the causes/effects of nonmental neural events? Equivalently, how do psychologically salient input/output events differ from those that lack such salience?
Suppose the display on my monitor is too bright for comfort and I decide to do something about it. Why is it that photons entering my retina are psychologically salient inputs but those striking the back of my head are not? Why is it that the moving of my hand to adjust the brightness and contrast controls is a salient output event, while unnoticed perspiration is not?
One may be tempted to say that the psychologically salient inputs are those that contribute to the production of the uncomfortable glare sensation, and the psychologically salient outputs are those that manifest the concomitant intention to make an adjustment. But then the salient input/output events are being picked out by reference to mental events taken precisely NOT as causal role occupants, but as exhibiting intrinsic features that are neither causal nor neural: the glare quale has an intrinsic nature that cannot be resolved into relations to other items, and cannot be identified with any brain state. The functionalist would then be invoking the very thing he is at pains to deny, namely, mental events as having more than neural and causal features.
Clearly, one moves in a circle of embarrassingly short diameter if one says: (i) mental events are mental because of the mental causal roles they play; and (ii) mental causal roles are those whose occupants are mental events.
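The circle can be made vivid by writing clauses (i) and (ii) down as definitions and noticing that neither supplies an independent criterion. A minimal sketch; the helpers causal_role_of and occupant_of are invented stand-ins for whatever pairing of events with roles one likes.

```python
# The circle of (i) and (ii) as mutually dependent definitions (illustrative only).

def causal_role_of(event):
    return ("role occupied by", event)   # stub: the role an event occupies

def occupant_of(role):
    return role[1]                       # stub: the event occupying a role

def is_mental_event(event):
    # (i) an event is mental iff it plays a mental causal role
    return is_mental_role(causal_role_of(event))

def is_mental_role(role):
    # (ii) a causal role is mental iff its occupant is a mental event
    return is_mental_event(occupant_of(role))

try:
    is_mental_event("glare sensation")
except RecursionError:
    print("the definition never bottoms out")   # no independent criterion of the mental
```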
The failure of functionalism is particularly evident in the case of qualia. Examples of qualia: felt pain, a twinge of nostalgia, the smell of burnt garlic, the taste of avocado. Is it plausible to say that such qualia can be exhaustively factored into a neural component and a causal/functional component? It is the exact opposite of plausible. It is not as loony as the eliminativist denial of qualia, but it is close. The intrinsic nature of qualitative mental states is essential to them. It is that intrinsic qualitative nature that dooms functionalism.
Therefore:
4) It cannot be maintained with truth that AI systems are literally conscious in the way we are. Talk of computers knowing this or that is metaphorical.
One response