Footnotes to Plato from the foothills of the Superstition Mountains

Why AI Systems Cannot be Conscious

1) To be able to maintain that AI systems are literally conscious in the way we are, conscious states must be multiply realizable. Consider a cognitive state such as knowing that 7 is a prime number. That state is realizable in the wetware of human brains. The question is whether the same type of state could be realized in the hardware of a computing machine. Keep in mind the type-token distinction. The realization of the state in question (knowing that 7 is prime) is its tokening in brain matter in the one instance, and in silicon-based matter in the other. This is not possible without the multiple realizability of one and the same type of mental state.

2) Conscious states (mental states) are multiply realizable only if functionalism is true. This is obvious, is it not?

3) Functionalism is incoherent.

Therefore:

4) AI systems cannot be literally conscious in the way we are.
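
For readers who like to see the skeleton laid bare, here is a minimal sketch of the argument's bare logical form, rendered in Lean. The propositional letters C, M, and F are mere stand-ins for 'AI systems are literally conscious in the way we are,' 'conscious states are multiply realizable,' and 'functionalism is true'; the sketch records only that (4) follows from (1)-(3) by two applications of modus tollens, not that the premises are exhausted by such a formalization.

```lean
-- Bare logical form of the argument; C, M, F are propositional stand-ins.
example (C M F : Prop)
    (p1 : C → M)   -- (1) if AI systems are literally conscious, conscious states are multiply realizable
    (p2 : M → F)   -- (2) conscious states are multiply realizable only if functionalism is true
    (p3 : ¬F)      -- (3) functionalism is incoherent, hence false
    : ¬C :=        -- (4) AI systems cannot be literally conscious in the way we are
  fun h : C => p3 (p2 (p1 h))  -- two applications of modus tollens
```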

That's the argument.  The premise that needs defending is (3).  So let's get to it.

Suppose Socrates Jones is in some such state as that of perceiving a tree. The state is classifiable as mental, as opposed to a physical state such as that of his lying beneath a tree. What makes a mental state mental? That is the question.

The functionalist answer is that what makes a mental state mental is just the causal role it plays in mediating between the sensory inputs, behavioral outputs, and other internal states of the subject in question. The idea is not the banality that mental states typically (or even always) have causes and effects, but that it is causal role occupancy, nothing more and nothing less, that constitutes the mentality of a mental state. The intrinsic nature of what plays the role is relevant only to its fitness for instantiating mental causal  roles, but not at all relevant to its being a mental state.

Consider a piston in an engine. You can't make a piston out of chewing gum, but being made of steel is no part of what makes a piston a piston. A piston is what it does within the 'economy' of the engine. Similarly, on functionalism, a mental state is what it does. This allows, but does not entail, that a mental state be a brain or CNS state. It also allows, but does not entail, that a mental state be a state of a  computing machine.

To illustrate, suppose my cat Zeno and I are startled out of our respective reveries by a loud noise at time t. Given the differences between human and feline brains, presumably man and cat are not in type-identical brain states at t. (One of the motivations for functionalism was the breakdown of the old type-type identity theory of Herbert Feigl, U. T. Place, J. J. C. Smart, et al.) Yet both man and cat are startled: both are in some sense in the same mental state, even though the states they are in are neither token- nor type-identical. The functionalist will hold that we are in functionally the same mental state in virtue of the fact that Zeno's brain state plays the same role in him as my brain state plays in me. It does the same mediatorial job vis-à-vis sensory inputs, other internal states, and behavioral outputs in me as the cat's brain state does in him.

On functionalism, then, the mentality of the mental is wholly relational. And as David Armstrong points out, "If the essence of the mental is purely relational, purely a matter of what causal role is played, then the logical possibility remains that whatever in fact plays the causal role is not material." This implies that "Mental states might be states of a spiritual substance." Thus the very feature of functionalism that allows mentality to be realized in computers and nonhuman brains generally, also allows it to be realized in spiritual substances if there are any.

Whether this latitudinarianism is thought to be good or bad, functionalism is a monumentally implausible theory of mind. There are the technical objections that have spawned a pelagic literature: absent qualia, inverted qualia, the 'Chinese nation,' etc. Thrusting these aside, I go for the throat, Searle-style. 

Functionalism is threatened by a fundamental incoherence. The theory states that what makes a state mental is nothing intrinsic to the state, but purely relational: a matter of its causes and effects. In us, these happen to be neural. (I am assuming physicalism for the time being.)  Now every mental state is a neural state, but not every neural state is a mental state. So the distinction between mental and nonmental neural states must be accounted for in terms of a distinction between two different sets of causes and effects, those that contribute to mentality and those that do not. But how make this distinction? How do the causes/effects of mental neural events differ from the causes/effects of nonmental neural events? Equivalently, how do psychologically salient input/output events differ from those that lack such salience?

Suppose the display on my monitor is too bright for comfort and I decide to do something about it. Why is it that photons entering my retina are psychologically salient inputs but those striking the back of my head are not? Why is it that the moving of my hand to adjust the brightness and contrast controls is a salient output event, while unnoticed perspiration is not?

One may be tempted to say that the psychologically salient inputs are those that contribute to the production of the uncomfortable glare sensation, and the psychologically salient outputs are those that manifest the concomitant intention to make an adjustment. But then the salient input/output events are being picked out by reference to mental events taken precisely NOT as causal role occupants, but as exhibiting intrinsic features that are neither causal nor neural: the glare quale has an intrinsic nature that cannot be resolved into relations to other items, and cannot be identified with any brain state. The functionalist would then be invoking the very thing he is at pains to deny, namely, mental events as having more than neural and causal features.

Clearly, one moves in a circle of embarrassingly short diameter if one says: (i) mental events are mental because of the mental causal roles they play; and (ii) mental causal roles are those whose occupants are mental events.

The failure of functionalism is particularly evident in the case of qualia.  Examples of qualia: felt pain, a twinge of nostalgia, the smell of burnt garlic, the taste of avocado.  Is it plausible to say that such qualia can be exhaustively factored into a neural component and a causal/functional component?  It is the exact opposite of plausible.  It is not as loony as the eliminativist denial of qualia, but it is close.  The intrinsic nature of qualitative mental states is essential to them. It is that intrinsic qualitative nature that dooms functionalism.

Therefore:

4) It cannot be maintained with truth that AI systems are literally conscious in the way we are. Talk of computers knowing this or that is metaphorical.


Comments

6 responses to “Why AI Systems Cannot be Conscious”

  1. DaveB

    “Talk of computers knowing this or that is metaphorical.”
    Agreed.

  2. Anthony Flood

    As far as I could follow your argument, Bill, I’m relieved. It would be interesting, however, to get, say, Grok’s concurrence. That is, it would be enlightening to see how AI expresses its (non-conscious) “understanding” of consciousness and if, in doing so—that is, in agreeing with or disputing your contention—it gives the game away. Tony

  3. Vlastimil

    Bill, I very much recommend https://www.jsanilac.com/ailom
    Sanilac is always brilliant. You will enjoy him.
    He explains how AI, because it is not conscious, is eating away at meaningfulness across many contexts.
    Sadly, I don’t think his proposals (such as stigmatizing those who mis/over/use AI, esp. as substitutes for sex and romance, https://www.jsanilac.com/ultrahumanism) will work.
    His proposals are wholesome, but I guess Nick Land is right that the emergent, unintentional powers of technocapital hardly can be stopped.
    As Heidegger said, only some God can save us.
    https://www.facebook.com/vlastimil.vohanka/videos/5093389800885345

  4. BV

    Thanks, Vlasta.
    In the first para of the first Sanilac piece, we read, “The computer displays “hello,” but it means nothing.” But of course we must distinguish between sentence/word meaning and speaker’s meaning. The word has a meaning, but there is no person ‘behind’ the word that means anything by the use of the word.
    But then I skimmed the rest of the long piece and found that the author is hip to the distinction which he puts in terms of semantic versus human meaning.
    I don’t have time to read the piece carefully, but others should.
    >>I guess Nick Land is right that the emergent, unintentional powers of technocapital hardly can be stopped.<<
    I fear that’s right. ‘Technocapital’ is the right word. We’ve created a monster that is now beyond our control, a monster driven by human greed, competitiveness, and overall depravity. We cannot let ourselves fall behind the ChiComs in the weaponization of AI.

  5. BV

    Nur ein Gott kann uns retten (“Only a God can save us”) are Heidegger’s exact words in his Spiegel interview near the end of his life. A God, not some God. Please forgive the pedantry.
    I did a lot of work on Heidegger in younger days. I need to return to his writings on Technik.
    Ever read Karl Jaspers? The Heidegger-Jaspers relationship fascinates me. My overall position is close to Jaspers’. Currently re-reading Der Philosophische Glaube angesichts der Offenbarung.

  6. Vlastimil

    Bill,
    The AI tool NotebookLM is good for summarizing PDFs, TXTs, videos, and audio files.
    It’s free. (The Pro version is cheap and gives you Gemini Pro as well.)
    It does not confabulate.
    It has a huge token window.
    You must upload the files.
    You can download any site as a PDF in your browser.
    And any YT video by snapany.com.
    And any Twitter video by twittervideodownloader.com.
    Useful!
    Overuse makes you stupid, of course.

    Btw,
    I’ve been hearing this advice a lot:
    Become really good at something they (gov, AI, …) can’t steal from you or simulate quite well (a way of stealing).
    What might that be?
    Thinking for yourself, without the aid or influence of AI?
    Face-to-face selling, coaching, etc.?
    Idiosyncratic non-mainstream content? (based on sources, maybe arcane, that AI doesn’t know or more or less ignores)
    Creating physical stuff or art that’s clearly not AI generated? (example: raw cartoon videos by the anon psychologist hoe_math on YT)
    Bitcoin?
    Prepping?
    Looksmaxing?
    Ideas welcome.
