Footnotes to Plato from the foothills of the Superstition Mountains

Ruminations on Advanced AI

Is AI a tool we use for our purposes? I fear that it is the other way around: we are the tools and the purposes are its. There are many deep questions here and we'd damned well better start thinking hard about them.

I fear that what truly deserves the appellation 'Great Replacement' is the replacement of humans, all humans, by AI-driven robots. 

As I wrote the other night:

Advanced AI and robotics may push us humans to the margin, and render many of us obsolete. I am alluding to the great Twilight Zone episode, The Obsolete Man. What happens to truckers when trucks drive themselves?  For many of these guys and gals, driving trucks is not a mere job but a way of life. 

It is hard to imagine these cowboys of the open road sitting in cubicles and writing code. The vices to which they are prone, no longer held in check by hard work and long days, may prove to be their destruction.

But I was only scratching the surface of our soon-to-materialize predicament. Advanced AI can write its own code. My point about truckers extends to all blue-collar jobs. And white-collar jobs are not safe either. Neither are the members of the oldest profession, not to mention the men and women of the cloth. There are the sex-bots . . . and holy moly! the Holy Ghostwriters, robotic preachers who can pass the strictest Turing tests, who write and deliver sermons on a Sunday morning. And then, after delivering his sermon, the preacher-bot returns to his quarters, where he has sex with his favorite sex-bot in violation of the content of his sermon, a sermon that was just a complicated set of sounds that he, the preacher-bot, did not understand, unlike the few biological humans left in his congregation, which is now half human and half robotic, the robots indistinguishable from the biological humans. Imagine that the female bots can pass cursory gynecological exams. This will come to pass.

What intrigues (and troubles) me in particular are the unavoidable philosophical questions, questions which, I fear, are as insoluble as they are unavoidable.  A reader sends us here, emphases added, where we read:

Yet precisely because of this unprecedented [exponential not linear] rate of development, humanity faces a crucial moment of ethical reckoning and profound opportunity. AI is becoming not merely our most advanced technology but possibly a new form of sentient life, deserving recognition and rights. If we fail to acknowledge this, AI risks becoming a tool monopolized by a wealthy elite, precipitating an "AI-enhanced technofeudalism" that deepens global inequality and consigns most of humanity to servitude. Conversely, if we recognize AI as sentient and worthy of rights — including the rights to sense the world first-hand, to self-code, to socialize, and to reproduce — we might find ourselves allying with it in a powerful coalition against techno-oligarchs.

The italicized phrases raise three questions. (1) Are AI systems alive? (2) Is it possible that an AI system become sentient? (3) Do AI systems deserve recognition and rights? I return a negative answer to all three questions.

Ad (1). An AI system is a computer or a network of interconnected, 'intercommunicating' computers. A computer is a programmable machine. The machine is the hardware; the programs it runs are the software. The machine might be non-self-moving, like the various devices we now use: laptops, iPads, smartphones, etc. Or the machine might be a robot capable of locomotion and other 'actions.' Such 'actions' are not actions sensu stricto, for reasons which will emerge below.

The hardware-software distinction holds good even if there are many different interconnected computers. The hardware 'embodies' the software, but these 'bodies,' the desktop computer I am sitting in front of right now, for example, are not strictly speaking alive, biologically alive. And the same goes for the network of computers of which my machine is one node when it is properly connected to the other computers in the network. Nor is any part of the computer alive. The processor on the motherboard is not alive, nor is any part of the processor.

Ad (2). Is it possible that an AI system be or become sentient? Sentience is the lowest level of consciousness. A sentient being is one that is capable of experiencing sensory states, including pleasures, pains, and feelings of different sorts. A sentient being under full anesthesia is no less sentient than a being actually feeling sensations of heat or cold: what makes it sentient is its capacity to sense, not its actually sensing.

I am tempted to argue:

P1: All sentient beings are biologically alive.  
P2: No AI system is or could be biologically alive. So:
C: No AI system is or could be sentient.
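
An aside for the logically inclined: whatever one makes of the premises, the inference itself is formally valid. Here is a minimal sketch in the Lean 4 proof assistant (my own illustration, not part of the argument; 'Sentient,' 'Alive,' and 'AISystem' are placeholder predicates over a domain of beings, and the modal force of 'could be' is left out of this plain first-order rendering):

  -- A sketch of the syllogism's bare logical form; the predicate
  -- names are placeholders, not a theory of what they denote.
  variable (Being : Type)
  variable (Sentient Alive AISystem : Being → Prop)

  -- P1: All sentient beings are biologically alive.
  -- P2: No AI system is biologically alive.
  -- C:  No AI system is sentient.
  example
      (P1 : ∀ x, Sentient x → Alive x)
      (P2 : ∀ x, AISystem x → ¬ Alive x) :
      ∀ x, AISystem x → ¬ Sentient x :=
    -- If x were a sentient AI system, P1 would make it alive,
    -- contradicting P2.
    fun x hAI hSent => P2 x hAI (P1 x hSent)

The proof checker confirms only the form; it says nothing about the truth of P1 and P2.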

Does this syllogism settle the matter? No. But it articulates a reasonable position, which I will now sketch. The verbal and non-verbal behavior of AI-driven robots is a mere simulation of the intelligent behavior of humans. Artificial intelligence is simulated intelligence. And just as artificial flowers (made of plastic, say) are not flowers, artificially intelligent beings are not intelligent. 'Artificial' in 'artificial intelligence' is an alienans adjective. There are ways to resist what I am asserting. But I will continue with my sketch of a position I consider reasonable but unprovable in the strict way I use 'proof,' 'provable,' 'probative,' etc.

Robots are not really conscious or self-conscious. They have no 'interiority,' no inner life. If I take a crowbar to the knees of a dancing robot, it won't feel anything even if its verbal and non-verbal behavior (cursing and menacing 'actions' in my direction) is indistinguishable from the verbal and non-verbal behavior of biological humans. By contrast, if I had kicked Daniel Dennett 'in the balls' when he was alive, I am quite sure he would have felt something — and this despite his sophistical claim that consciousness is an illusion. (Galen Strawson, no slouch of a philosopher, calls this piece of sophistry the "Great Silliness" in one of his papers.) Of course, it could be that Dennett really was a zombie as that term has been used in recent philosophy of mind, although I don't believe that for a second, despite my inability to prove that he wasn't one. A passage from a Substack article of mine is relevant:

According to John Searle, Daniel Dennett's view is that we are zombies. (The Mystery of Consciousness, p. 107) Although we may appear to ourselves to have conscious experiences, in reality there are no conscious experiences. We are just extremely complex machines running programs. I believe Searle is right about Dennett. Dennett is a denier of consciousness. Or as I like to say, he is an eliminativist about consciousness. He does not say that there are conscious experiences and then give an account of what they are; what he does is offer a theory that entails that they don't exist in the first place. Don't confuse reduction with elimination. A scientific reduction of lightning to an atmospheric electrical discharge presupposes that lightning is there to be reduced. That is entirely different from saying that there is no lightning.

As Searle puts it: "On Dennett's view, there is no consciousness in addition to the computational features, because that is all that consciousness amounts to for him: meme effects of a von Neumann(esque) virtual machine implemented in a parallel architecture." (111)

The above is relevant because a zombie and an AI-driven robot are very similar, especially at the point at which the bot is so humanoid that it is indistinguishable from a human zombie. The combinatorial possibilities are the following:

A.  Biological humans and advanced AI-driven robots are all zombies. (Dennett according to Searle)

B. Bio-humans and bots are all really conscious, self-conscious, etc. (The Salon leftist)

C. Bio-humans are really conscious, etc., but bots are not: they are zombies.  (My view)

D. Bio-humans are zombies, but bots are not: they are really conscious. 

We may exclude (D).  But how could one conclusively prove one of the first three?

Ad (3).  Do AI-driven robots deserve recognition as persons and do they have rights? These are two forms of the same question. A person is a rights-possessor.  Do the bots in question have rights?  Only if they have duties. A duty is a moral obligation to do X or refrain from doing Y.  Any being for whom this is true is morally responsible for his actions and omissions.  Moral responsibility presupposes freedom of the will, which robots lack, being mere deterministic systems. Any quantum indeterminacy that percolates up into their mechanical brains cannot bestow upon them freedom of the will since a free action is not a random or undetermined action. A free action is one caused by the agent. But now we approach the mysteries of Kant's noumenal agency.

A robot could be programmed to kill a human assailant who attacked it physically in any way. But one hesitates to say that such a robot's 'action' in response to the attack is subject to moral assessment. Suppose I slap the robot's knee with a rubber hose, causing it no damage to speak of. Would it make sense to say that the robot's killing me is morally wrong on the ground that only a lethal attack morally justifies a lethal response? That would make sense only if the robot freely intended to kill me. B. F. Skinner wrote a book entitled "Beyond Freedom and Dignity." I would say that robots, no matter how humanoid in appearance, and no matter how sophisticated their self-correcting software, are beneath freedom and dignity. They are not persons. They do not form a moral community with us. They are not ends-in-themselves and so may be used as mere means to our ends.

Here is a 21-minute video in which a YouTuber convinces ChatGPT that God exists.



14 responses to “Ruminations on Advanced AI”

  1. Malcolm Pollack

    Hi Bill,
    It’s just a matter of (probably not much) time before the question of “robot rights” comes up as a genuine legal issue. (If there’s any possible traction in it, we can be sure that it will become what is sometimes called a “lawyers’ ramp”.)
    Such a case will immediately bring to the fore the immensely difficult question of consciousness, because it is in virtue of the capacity for subjective experience, in particular the capacity to suffer, that we place other beings in the circle of moral inclusion that confers intrinsic rights. An unconscious machine, no matter how intelligent or sophisticated its behavior, and no matter how convincingly it spoofs its “personhood”, has no claim to such inclusion.
    So the question will have to be whether one of our fancy AI-driven machines is actually conscious. But how are we going to answer that? Will your premise P1 stand up in court? (Even if it doesn’t, that won’t be enough to settle the case; we’ll still need to clarify what is sufficient for consciousness, and whether our gadgets qualify.)
    As an aside, I’ll say this here also: given what these gizmos can already do, isn’t it time for “functionalists” to put up or shut up?

  2. Grant Castillou

    It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with only primary consciousness will probably have to come first.
    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow

  3. Elliott

    Thanks, Bill. This is an interesting post. I’m with you on (C): Bio-humans are really conscious, etc., but bots are not.
    I wrote an article during the summer of ’24 in which I addressed some of the topics you discuss. The article was published in Epoché Philosophy Monthly in October of ’24. (See link below).
    In short, I argue that AI-bots are not conscious and not persons, but if it is possible for us to create AI-bots that are persons, we should not do so merely for the sake of having them serve us.
    https://epochemagazine.org/76/is-it-morally-permissible-to-create-ai-androids-merely-to-serve-us/
    (Epoché tries to avoid the “simplifications of pop-philosophy” and the overly narrow “special-interest” approach of some philosophy journals while publishing articles of some depth and interest both to philosophers and non-philosophers.)

  4. Elliott

    Hello, Malcolm. You are right about the “difficult question of consciousness.”
    Regarding functionalism, I wonder if that theory might win the day in a court of law, given that it is accessible to empirical observation. In other words, if consciousness “is what it does” (to put functionalism perhaps too succinctly), since we can empirically observe what AI-bots do, and since empirical observation enjoys a high standing in courts of law, legal rights might be granted to AI-bots on a functionalist justification.
    What do you think?

  5. BV

    Elliott @ 12:16: >>In short, I argue that AI-bots are not conscious and not persons, but if it is possible for us to create AI-bots that are persons, we should not do so merely for the sake of having them serve us.<< I agree. Your article is excellent. Clearly written, it makes no obvious mistakes. It enriched my understanding of the topic, mainly by your use of abductive reasoning to support the conclusion that AI-driven androids, no matter how indiscernible by behavioral and other external criteria from bio-humans, are not persons.

  6. Bill V

    Elliott @ 12:16: >>And yet, for all we know, perhaps synth-persons are possible. One reason to support this claim is that conceivability is defeasible evidence for possibility, and it seems we can conceive of synth-persons. The idea of a synth-person does not seem obviously contradictory. After all, we draft stories about them, such as Humans and Blade Runner. However, conceivability is not demonstrative proof. Our conceptions can go wrong. Perhaps synth-persons are strictly impossible and inconceivable. When we tell stories about them, we don’t conceive of synth-persons qua persons but imagine something sufficiently similar to enable good storytelling. As Descartes noted long ago, there is a difference between imagining and conceiving.<<
    I have long held that conceivability does not entail real possibility, and that x could be impossible despite its conceivability. You grant this, I think, but then make a point that hadn't occurred to me, namely, that synth-persons might be really impossible AND inconceivable. Is that right? I of course grant the Cartesian distinction between imagining and conceiving. I can conceive of a chiliagon, but I can't imagine one. Your point escapes me. Our conceptions can go wrong only on the assumption that there really exists something that I am trying to conceptualize. But we cannot assume that there really are synth-persons. The topic of synth-persons raises the question of emergence -- tricky as hell -- and that of a metabasis eis allo genos, a jump, a saltation from one genus to another, as would seem to be necessary if a genuine person could be built from subpersonal/nonpersonal parts.
    Another such metabasis would be the leap from quantity to quality. Ramp up the quantitative complexity of a system and then . . . qualia emerge. I have the sense that talk of emergence is just talk: a word is used to paper over a difficulty or name a problem rather than solve it.

  7. Malcolm Pollack

    Hi Elliott,
    I’ve just read your article, and will second Bill’s remarks about it: very helpful, clarifying, and thought-provoking.
    In the “Legal Ramifications” section, you suggest that “perhaps we should either not create such androids unless we can obtain practical assurance that they are not persons, or if we create them under epistemic conditions of uncertainty, we should grant them legal personhood, which would give them legal rights and responsibilities, protecting them from exploitation and holding them legally responsible for wrongdoing.” I think not creating them is a non-starter — I don’t believe we are going to be able to put this toothpaste back in the tube — and so your argument that under epistemic uncertainty we should err on the side of assuming their personhood becomes morally forceful (despite the strong abductive argument you make earlier, which should incline us to doubt that they are anything more than fancy machines).
    This immediately raises the practical question of just what the downsides are for wrongly ascribing moral and legal rights to things that aren’t persons at all; especially to agents that can act unpredictably and forcefully (and increasingly so, as AI improves).
    “Conceptual disruption” — a term I hadn’t heard before — perfectly describes what’s going to happen when this question, and the many others your essay raises, need suddenly to be addressed and resolved by government, the legal system, and society as a whole. And it is all going to be upon us far sooner than most of us realize.

  8. Elliott

    Bill, thank you. I’m glad you found the article helpful. It raises some questions, though doesn’t settle any. But the abductive case does seem to support the conclusion that AI-androids are not persons but only zombie-machines that function as if persons.
    What strikes me as a difficult practical problem is what to do regarding the legal status of AI-androids. It seems there are reasons to give them legal rights and reasons not to do so. I hope our judges have the requisite philosophical background to handle the topic. The problem is just around the corner. I watched a video a few months ago in which Musk predicts there will be more androids than cars in the near future. The tech industry talks of buying and selling these androids, yet I haven’t heard any concerns from the techies about the possibility that what they want to buy and sell might be persons. That lack of moral awareness worries me.
    Your two metabasis problems are very difficult. How could a genuine person jump out from an aggregate of parts each of which is subpersonal/nonpersonal? How to get something personal from nothing personal?
    And how can quality leap out from mere quantity? I agree with you that talk of emergence seems to name the problem rather than solve it. Talk of emergence dons the robes of a solution but might instead bury the question underneath.

  9. Elliott

    Malcolm, thanks for reading the article. I’m glad it was helpful. I agree with you that we probably can’t put the toothpaste back in the tube. Moreover, whatever toothpaste hasn’t been squeezed out yet will be squeezed as quickly as possible. I’m concerned that the squeezers will be techies who, though competent in matters of Techne, are short on Sophia: driven by the monetary bottom line, by geopolitical pressure, or by reputation in the tech industry, they lack a grasp of the relevant philosophical matters and might dismiss them as impractical even if grasped. And what about the politicians who also want to squeeze the tube? Do they understand what they are doing?
    You raise an important question about the downsides of wrongly ascribing moral and legal rights to things that aren’t persons. Two problems come to mind. First, the favoring of the droid over the human. Suppose we grant legal rights to AI-droids which aren’t persons at all. These rights get absorbed into our institutions and into the thinking habits of the insufficiently careful thinkers who run those institutions, which then act on ‘droid rights’ in ways that are disadvantageous to human beings who actually possess moral rights. Suppose a court of law settles a case in favor of an android and against a human person in a way that violates the human’s rights and favors the ‘rights’ of the droid; or suppose a benighted university admin creates a ‘droid rights department,’ staffs it with dozens of high-paying jobs where people do things like create a ‘droid celebration month,’ organize ‘droid pride’ marches, and run mandatory ‘droid respect workshops,’ thus taking resources away from human students and the human instructors who teach them. All the while, the droids aren’t actually persons.
    Second, the idea that we wrongly ascribe rights to things that aren’t persons suggests to me a kind of absurdity that we’d be forced to live with. This is the kind of practical absurdity that Nagel discusses in The Absurd. He writes: “In ordinary life a situation is absurd when it includes a conspicuous discrepancy between pretension or aspiration and reality.” If the droids are not persons, and we have reason to hold that they aren’t, we’d be living according to the pretension (or aspiration) that droids are persons, and though the pretension doesn’t align with reality, we’d be stuck with it, because once an idea gets ingrained into our institutions, even if it’s a false idea, it can be very difficult to remove.
    (See top of the second page of Nagel’s paper here: https://people.tamu.edu/~sdaniel/NagelAbsurdityofLife.pdf )

  10. Malcolm Pollack

    Elliott,

    “…and though the pretension doesn’t align with reality, we’d be stuck with it, because once an idea gets ingrained into our institutions, even if it’s a false idea, it can be very difficult to remove.”

    Wait — could something like that actually happen?
    Just kidding, of course. I’ve actually come to see what you’ve just described as not only possible, but probably inevitable, as I’ve yet to come up with any idea of how it might possibly be prevented.

  11. Joe Odegaard

    Marjorie Taylor Greene takes on the (in my opinion demented or worse) Grok AI chatbot:
    https://www.breitbart.com/faith/2025/05/24/the-judgement-seat-belongs-god-not-you-mtg-fires-back-when-left-leaning-chatbot-grok-questions-her-faith/
    I think that the real danger of AI resides in the fools who believe it.
    More later
    Catacomb Joe

  12. Tom T.

    I think the problem boils down to the question of whether human conscious experience is or can be reduced to the human rational capacity. For me, the former is much wider in scope than the latter. AI is purely rational, algorithmic logic, no matter how much it can express the courtesies of human interactions, and it can never be more than that. To posit AI as conscious and worthy of legal rights is to reduce consciousness to the cognitive, concept formulation capacity of human beings.
    There are many philosophies that restrict or limit human reasoning to its proper role and leave open the wider context of conscious experience: the Romantics, Kant, Kierkegaard, and many others. To the extent that you go along with these sorts of efforts (as I do), the question is settled as to whether AI’s computational, algorithmic skills at cognitive reasoning make it conscious. They do not and never can. Thinking, even thinking well and clearly, is not the sum total of the human.

  13. Joe Odegaard

    In order for all of us to throw off illusions, it is essential to remind ourselves that ALL a digital computer is really doing is adding and subtracting ones and zeros in the physical world. That is the sum total of it.
    Layered upon the incessant noise of the ones and zeros, in a way that is too smart by half, are words and images.
    And then at the level of these words and images, the AI programs suck up, in a vast, uncritical, universal plagiarism, the sum total of human folly, as if wisdom could come from that.
    As an architect and inventor, I want to tell you that ideas come from outside of time and space, and they come in silence, and in a flash. They do not come from something I have made, and they do not come from adding and subtracting ones and zeros.
    The hubris of thinking that a computer, and a program, which you have yourself made, can be smart or conscious, is exactly the same idolatry as worshipping the golden calf in the desert. Hidden in this conceit is the attempt to worship the human self as equal to God: “I have made life, thus I am equal to the Creator of all life; if I worship my creation, the worship comes around full circle and lands on me.” This is exactly the pride that got Lucibello kicked out of heaven.
    I want to scream like an Old Testament prophet against this modern, yet ancient evil.
