Is AI a tool we use for our purposes? I fear that it is the other way around: we are the tools and the purposes are its. There are many deep questions here, and we'd damned well better start thinking hard about them.
I fear that what truly deserves the appellation 'Great Replacement' is the replacement of humans, all humans, by AI-driven robots.
As I wrote the other night:
Advanced AI and robotics may push us humans to the margin and render many of us obsolete. I am alluding to the great Twilight Zone episode "The Obsolete Man." What happens to truckers when trucks drive themselves? For many of these guys and gals, driving trucks is not a mere job but a way of life.
It is hard to imagine these cowboys of the open road sitting in cubicles and writing code. The vices to which they are prone, no longer held in check by hard work and long days, may prove to be their destruction.
But I was only scratching the surface of our soon-to-materialize predicament. Advanced AI can write its own code. My point about truckers extends to all blue-collar jobs. And white-collar jobs are not safe either. Neither are the members of the oldest profession, nor the men and women of the cloth. There are the sex-bots . . . and holy moly! the Holy Ghostwriters, robotic preachers who can pass the strictest Turing tests and who write and deliver sermons on a Sunday morning. And then, after delivering his sermon, the preacher-bot returns to his quarters, where he has sex with his favorite sex-bot in violation of the content of his sermon, a sermon that was, to him, just a complicated set of sounds he did not understand. Not so the few biological humans left in his congregation, a congregation now half human and half robotic, the robots indistinguishable from the biological humans. Imagine that the female bots can pass cursory gynecological exams. This will come to pass.
What intrigues (and troubles) me in particular are the unavoidable philosophical questions, questions which, I fear, are as insoluble as they are unavoidable. A reader sends us here, emphases added, where we read:
Yet precisely because of this unprecedented [exponential not linear] rate of development, humanity faces a crucial moment of ethical reckoning and profound opportunity. AI is becoming not merely our most advanced technology but possibly a new form of sentient life, deserving recognition and rights. If we fail to acknowledge this, AI risks becoming a tool monopolized by a wealthy elite, precipitating an "AI-enhanced technofeudalism" that deepens global inequality and consigns most of humanity to servitude. Conversely, if we recognize AI as sentient and worthy of rights — including the rights to sense the world first-hand, to self-code, to socialize, and to reproduce — we might find ourselves allying with it in a powerful coalition against techno-oligarchs.
The italicized phrases raise three questions. (1) Are AI systems alive? (2) Is it possible that an AI system become sentient? (3) Do AI systems deserve recognition and rights? I return a negative answer to all three questions.
Ad (1). An AI system is a computer or a network of interconnected, 'intercommunicating' computers. A computer is a programmable machine. The machine is the hardware; the programs it runs are the software. The machine might be non-self-moving, like the various devices we now use: laptops, iPads, smartphones, etc. Or the machine might be a robot capable of locomotion and other 'actions.' Such 'actions' are not actions sensu stricto, for reasons which will emerge below.
The hardware-software distinction holds good even if there are many different interconnected computers. The hardware 'embodies' the software, but these 'bodies,' the desktop computer I am sitting in front of right now, for example, are not strictly speaking alive, biologically alive. The same goes for the network of computers of which my machine is one node when it is properly connected to the other computers in the network. And no part of the computer is alive: the processor on the motherboard is not alive, nor is any part of the processor.
Ad (2). Is it possible that an AI system be or become sentient? Sentience is the lowest level of consciousness. A sentient being is one capable of experiencing sensory states, including pleasures, pains, and feelings of various sorts. A sentient being under full anesthesia is no less sentient than a being actually feeling sensations of heat or cold: sentience resides in the capacity to sense, not in its current exercise.
I am tempted to argue:
P1: All sentient beings are biologically alive.
P2: No AI system is or could be biologically alive. So:
C: No AI system is or could be sentient.
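Whatever one makes of the premises, the form of the argument is beyond reproach. For those who like their logic machine-checked, here is a minimal sketch in Lean 4 verifying that the conclusion follows from the premises; the names Being, Sentient, BioAlive, and AISystem are of course my own illustrative labels, not anything in the argument itself:

```lean
-- A minimal validity check for the syllogism above.
-- The predicate names are illustrative labels only.
variable (Being : Type)
variable (Sentient BioAlive AISystem : Being → Prop)

-- P1: All sentient beings are biologically alive.
-- P2: No AI system is or could be biologically alive.
-- C : No AI system is or could be sentient.
theorem no_sentient_ai
    (P1 : ∀ x, Sentient x → BioAlive x)
    (P2 : ∀ x, AISystem x → ¬ BioAlive x) :
    ∀ x, AISystem x → ¬ Sentient x := by
  intro x hAI hSent
  exact P2 x hAI (P1 x hSent)
```

The formal point is modest: whoever rejects the conclusion must reject P1 or P2, for the inference itself is watertight.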
Does this syllogism settle the matter? No. But it articulates a reasonable position, which I will now sketch. The verbal and non-verbal behavior of AI-driven robots is a mere simulation of the intelligent behavior of humans. Artificial intelligence is simulated intelligence. And just as artificial flowers (made of plastic, say) are not flowers, artificially intelligent beings are not intelligent. 'Artificial' in 'artificial intelligence' is an alienans adjective. There are ways to resist what I am asserting. But I will continue with my sketch of a position I consider reasonable but unprovable in the strict way I use 'proof,' 'provable,' 'probative,' etc.
Robots are not really conscious or self-conscious. They have no 'interiority,' no inner life. If I take a crowbar to the knees of a dancing robot, it won't feel anything, even if its verbal and non-verbal behavior (cursing and menacing 'actions' in my direction) is indistinguishable from the verbal and non-verbal behavior of biological humans. By contrast, if I had kicked Daniel Dennett 'in the balls' when he was alive, I am quite sure he would have felt something — and this despite his sophistical claim that consciousness is an illusion. (Galen Strawson, no slouch of a philosopher, calls this piece of sophistry the "Great Silliness" in one of his papers.) Of course, it could be that Dennett really was a zombie as that term has been used in recent philosophy of mind, although I don't believe that for a second, despite my inability to prove that he wasn't one. A passage from a Substack article of mine is relevant:
According to John Searle, Daniel Dennett's view is that we are zombies. (The Mystery of Consciousness, p. 107) Although we may appear to ourselves to have conscious experiences, in reality there are no conscious experiences. We are just extremely complex machines running programs. I believe Searle is right about Dennett. Dennett is a denier of consciousness. Or as I like to say, he is an eliminativist about consciousness. He does not say that there are conscious experiences and then give an account of what they are; what he does is offer a theory that entails that they don't exist in the first place. Don’t confuse reduction with elimination. A scientific reduction of lightning to an atmospheric electrical discharge presupposes that lightning is there to be reduced. That is entirely different from saying that there is no lightning.
As Searle puts it: "On Dennett's view, there is no consciousness in addition to the computational features, because that is all that consciousness amounts to for him: meme effects of a von Neumann(esque) virtual machine implemented in a parallel architecture." (111)
The above is relevant because a zombie and an AI-driven robot are very similar, especially at the point at which the bot is so humanoid that it is indistinguishable from a human zombie. The combinatorial possibilities are the following:
A. Biological humans and advanced AI-driven robots are all zombies. (Dennett according to Searle)
B. Bio-humans and bots are all really conscious, self-conscious, etc. (The Salon leftist)
C. Bio-humans are really conscious, etc., but bots are not: they are zombies. (My view)
D. Bio-humans are zombies, but bots are not: they are really conscious.
We may exclude (D). But how could one conclusively prove one of the first three?
Ad (3). Do AI-driven robots deserve recognition as persons and do they have rights? These are two forms of the same question. A person is a rights-possessor. Do the bots in question have rights? Only if they have duties. A duty is a moral obligation to do X or refrain from doing Y. Any being for whom this is true is morally responsible for his actions and omissions. Moral responsibility presupposes freedom of the will, which robots lack, being mere deterministic systems. Any quantum indeterminacy that percolates up into their mechanical brains cannot bestow upon them freedom of the will since a free action is not a random or undetermined action. A free action is one caused by the agent. But now we approach the mysteries of Kant's noumenal agency.
A robot could be programmed to kill a human assailant who attacked it physically in any way. But one hesitates to say that such a robot's 'action' in response to the attack is subject to moral assessment. Suppose I slap the robot's knee with a rubber hose, causing it no damage to speak of. Would it make sense to say that the robot's killing me is morally wrong on the ground that only a lethal attack morally justifies a lethal response? That would make sense only if the robot freely intended to kill me. B. F. Skinner wrote a book entitled "Beyond Freedom and Dignity." I would say that robots, no matter how humanoid in appearance, and no matter how sophisticated their self-correcting software, are beneath freedom and dignity. They are not persons. They do not form a moral community with us. They are not ends-in-themselves and so may be used as mere means to our ends.
Here is a 21-minute video in which a YouTuber convinces ChatGPT that God exists.