Why AI Systems Cannot be Conscious

1) To be able to maintain that AI systems are literally conscious in the way we are, conscious states must be multiply realizable. Consider a cognitive state such as knowing that 7 is a prime number. That state is realizable in the wetware of human brains. The question is whether the same type of state could be realized in the hardware of a computing machine. Keep in mind the type-token distinction. The realization of the state in question (knowing that 7 is prime) is its tokening in brain matter in the one instance, in silicon-based matter in the other. This is not possible without multiple realizability of one and the same type of mental state.

2) Conscious states (mental states) are multiply realizable only if functionalism is true. This is obvious, is it not?

3) Functionalism is incoherent.

Therefore:

4) AI systems cannot be literally conscious in the way we are.

That's the argument.  The premise that needs defending is (3).  So let's get to it.

Suppose Socrates Jones is in some such state as that of perceiving a tree. The state is classifiable as mental, as opposed to a physical state like that of his lying beneath a tree. What makes a mental state mental? That is the question.

The functionalist answer is that what makes a mental state mental is just the causal role it plays in mediating between the sensory inputs, behavioral outputs, and other internal states of the subject in question. The idea is not the banality that mental states typically (or even always) have causes and effects, but that it is causal role occupancy, nothing more and nothing less, that constitutes the mentality of a mental state. The intrinsic nature of what plays the role is relevant only to its fitness for instantiating mental causal  roles, but not at all relevant to its being a mental state.

Consider a piston in an engine. You can't make a piston out of chewing gum, but being made of steel is no part of what makes a piston a piston. A piston is what it does within the 'economy' of the engine. Similarly, on functionalism, a mental state is what it does. This allows, but does not entail, that a mental state be a brain or CNS state. It also allows, but does not entail, that a mental state be a state of a  computing machine.
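
For readers who think in code, a rough analogy lies ready to hand: the functionalist treats a mental state the way a programmer treats an interface, something different 'hardware' can implement. The Python sketch below is my own toy illustration, with invented names; nothing in it is drawn from the functionalist literature itself.

```python
from abc import ABC, abstractmethod

class StartleRole(ABC):
    """On functionalism, a 'mental state' is exhausted by its causal role:
    how it mediates sensory inputs, other internal states, and outputs."""
    @abstractmethod
    def respond(self, loud_noise: bool) -> str: ...

class WetwareRealizer(StartleRole):
    # One occupant of the role: a state of a biological brain.
    def respond(self, loud_noise: bool) -> str:
        return "flinch and orient toward the noise" if loud_noise else "carry on"

class SiliconRealizer(StartleRole):
    # A different occupant of the very same role: a state of a machine.
    def respond(self, loud_noise: bool) -> str:
        return "flinch and orient toward the noise" if loud_noise else "carry on"

def startle(occupant: StartleRole) -> str:
    # Nothing here cares what the occupant is made of, only what it does.
    return occupant.respond(loud_noise=True)

print(startle(WetwareRealizer()))
print(startle(SiliconRealizer()))
```

The analogy captures multiple realizability, which is all the functionalist needs for the possibility of machine minds; whether role occupancy exhausts mentality is exactly what is contested below.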

To illustrate, suppose my cat Zeno and I are startled out of our respective reveries by a loud noise at time t. Given the differences between human and feline brains, presumably man and cat are not in type-identical brain states at t.  (One of the motivations for functionalism was the breakdown of the old type-type identity theory of Herbert Feigl, U. T. Place, J. J. C. Smart, et al.)  Yet both man and cat are startled: both are in some sense in the same mental state, even though the states they are in are neither token- nor type-identical. The functionalist will hold that we are in functionally the same mental state in virtue of the fact that Zeno's brain state plays the same role in him as my brain state plays in me. It does the same mediatorial job vis-à-vis sensory inputs, other internal states, and behavioral outputs in me as the cat's brain state does in him.

On functionalism, then, the mentality of the mental is wholly relational. And as David Armstrong points out, "If the essence of the mental is purely relational, purely a matter of what causal role is played, then the logical possibility remains that whatever in fact plays the causal role is not material." This implies that "Mental states might be states of a spiritual substance." Thus the very feature of functionalism that allows mentality to be realized in computers and nonhuman brains generally, also allows it to be realized in spiritual substances if there are any.

Whether this latitudinarianism is thought to be good or bad, functionalism is a monumentally implausible theory of mind. There are the technical objections that have spawned a pelagic literature: absent qualia, inverted qualia, the 'Chinese nation,' etc. Thrusting these aside, I go for the throat, Searle-style. 

Functionalism is threatened by a fundamental incoherence. The theory states that what makes a state mental is nothing intrinsic to the state, but purely relational: a matter of its causes and effects. In us, these happen to be neural. (I am assuming physicalism for the time being.)  Now every mental state is a neural state, but not every neural state is a mental state. So the distinction between mental and nonmental neural states must be accounted for in terms of a distinction between two different sets of causes and effects, those that contribute to mentality and those that do not. But how make this distinction? How do the causes/effects of mental neural events differ from the causes/effects of nonmental neural events? Equivalently, how do psychologically salient input/output events differ from those that lack such salience?

Suppose the display on my monitor is too bright for comfort and I decide to do something about it. Why is it that photons entering my retina are psychologically salient inputs but those striking the back of my head are not? Why is it that the moving of my hand to adjust the brightness and contrast controls is a salient output event, while unnoticed perspiration is not?

One may be tempted to say that the psychologically salient inputs are those that contribute to the production of the uncomfortable glare sensation, and the psychologically salient outputs are those that manifest the concomitant intention to make an adjustment. But then the salient input/output events are being picked out by reference to mental events taken precisely NOT as causal role occupants, but as exhibiting intrinsic features that are neither causal nor neural: the glare quale has an intrinsic nature that cannot be resolved into relations to other items, and cannot be identified with any brain state. The functionalist would then be invoking the very thing he is at pains to deny, namely, mental events as having more than neural and causal features.

Clearly, one moves in a circle of embarrassingly short diameter if one says: (i) mental events are mental because of the mental causal roles they play; and (ii) mental causal roles are those whose occupants are mental events.

The failure of functionalism is particularly evident in the case of qualia.  Examples of qualia: felt pain, a twinge of nostalgia, the smell of burnt garlic, the taste of avocado.  Is it plausible to say that such qualia can be exhaustively factored into a neural component and a causal/functional component?  It is the exact opposite of plausible.  It is not as loony as the eliminativist denial of qualia, but it is close.  The intrinsic nature of qualitative mental states is essential to them. It is that intrinsic qualitative nature that dooms functionalism.

Therefore:

4) It cannot be maintained with truth that AI systems are literally conscious in the way we are. Talk of computers knowing this or that is metaphorical.

Papyrology and AI: the Library at Herculaneum

How much of a curse and how much of a blessing  Artificial Intelligence will prove to be remains to be seen. Book this on the blessing side of the ledger:

The University of Kentucky (UKY) has announced it is a co-recipient of a $13.5 million (€11.5 million) grant from the European Research Council in support of an international effort to decipher ancient papyrus scrolls carbonized and buried by the eruption of Mount Vesuvius in 79 CE.

[. . .]

The award will enable the consortium to scale its efforts over the next six years to digitally recover, analyze, and read text from hundreds of papyrus scrolls that have until now been undecipherable. In addition, the project will leverage AI to connect pieces of scrolls, find patterns in how the library at Herculaneum was organized, and help establish best practices to preserve the collection. The project may uncover as much as 4.5 million words of entirely new Greek and Latin literature.

 

AI and Demons

You may remember our 'demonic' discussion from last summer. See  Reading Now: Demonic Foes. The comment thread runs to 61 entries, some of them excellent.

Bro Joe now wants us to read: Satanic AI: ChatGPT gives instructions.

Another topic we ought to explore is the possibility of demonic possession of AI systems. 

According to Richard Gallagher, M.D., "The essence of a possession is the actual control of the body (never the 'soul' or 'will') of a person by one or more evil spirits." (Demonic Foes, p. 80). Now AI systems do not have souls or wills of their own (or so I argue), but they do have bodies, albeit inorganic.  Might they then host demons?

Gallagher's book is outstanding. So if you think demonology is buncombe, you should study his book and disembarrass yourself of your illusions. 

What’s to Stop an AI System from having a Spiritual Soul?

John Doran in a comment presents an argument worth bringing to the top of the pile:

A) Anything conscious has a non-material basis for such consciousness.

B) Certain AI constructs [systems] are conscious.

Therefore:

C) Such AI constructs [systems] have a non-material component in which their consciousness resides.

Why doesn't that work? It's obviously valid.

In short, and in the philosophical colloquial, when a man and woman successfully combine their mobile and sessile gametes, a human person is brought into existence, complete with a soul.

So why can we not bring an ensouled being into existence as a result of the manipulation of silicon, plastic, metal, coding, and the application of electricity?

A provocative question.  But before he asked the question, he gave an argument. The argument is plainly valid. But all that means is that the conclusion follows from the premises. A valid argument is one such that if all the premises are true, then it cannot be the case that the conclusion is false. But are both premises true? I am strongly inclined to accept (A), but I reject (B).  The various arguments from the unity of consciousness we have been discussing convince me that no material system can be conscious. How does John know that (B) is true? Does he have an argument for (B)? Can he refute the arguments from the unity of consciousness?

Now to his question.

John appears to be suggesting an emergentist view according to which, at a certain high level of material complexity an "ensouled being" (his phrase) emerges or comes into existence from the material system.  His view, I take it, is that souls are emergent entities that can arise from very different types of material systems. In the wet and messy human biological system, a mobile gamete (a spermatozoon) mates with a sessile gamete, an ovum, to produce a conceptus such that at the moment of conception a spiritual soul comes into existence.  In a non-living silicon-based hunk of dry computer hardware running appropriately complex software, spiritual souls can also come into existence. Why not?

Emergence is either supernatural or natural.

Supernatural emergence is either Platonic or Christian. On the former, God causes pre-existent souls to take up residence in human bodies at the moment of biological conception.  On the latter, God creates human souls ex nihilo at the moment of conception.  Thus on the latter the coming to be of a human being is a joint task: the conjugal act of the parents supplies the material body and God supplies the spiritual soul.

Natural emergence involves no divine agency. Souls emerge by natural necessity at a certain level of material complexity, whether biological or computational. Edward Feser, in his discussion of William Hasker's emergent dualism, mentions a dilemma pointed out by  Brian Leftow.  (Immortal Souls, 2024, 517.) I'll put it in my own way. Souls either emerge from matter or they do not.  If they emerge, then they could only be material, which contradicts the assumption that they are necessarily immaterial.  If they do not emerge,  then they could be immaterial, but could not be emergent.  

The natural emergence from matter of an immaterial individual (substance) is metaphysically impossible.  The very notion is incoherent.  It follows that immortal souls cannot naturally emerge either biologically or computationally. The only way they could emerge is supernaturally.

There is a second consideration that casts doubt on naturally emergent dualism.  Does a spiritual soul, once it emerges, continue to exist on its own even after the material emergence base ceases to exist? In other words, are souls emergent entities that become ontologically independent after their emergence, or do they remain dependent upon the matrix, whether biological or silicon-based, from which they emerged? 

I'm inclined to say that 'naturally emergent dualism of individual substances' is a misbegotten notion.  Property emergence is a different story. I take no position on that. Leastways, not at the moment.

AI, Intellectual Theft, and Lawsuits

A year or two ago I was bumping along at about one thousand page views per diem when I experienced an unusual uptick in traffic. Inspection of the MavPhil traffic log suggested that my content was being stolen. But I didn't much care, and I still don't much care inasmuch as my content has very little commercial value, and in any case, I'm a "made man" with more than enough loot to see me through my remaining sublunary travels and travails. My thinking and writing is a labor of love and not a money-making enterprise. Add to that the fact that I'm an Enough is Enough kind of guy who has no interest in piling up the lean green far in excess of what is needed.  And maybe I'm steering Group Mind or Objektiver Geist in a wholesome direction. I'm doing my bit, like a good Boomer, to make this world a better place.

But what if you make your living by scribbling? What if you have a 'high maintenance' wife, children, a hefty mortgage and you live in a high-tax lefty locale? Interesting questions here.  More grist for the mill.

And so I tip my  hat to Ingvarius Maximus the Alhambran for sending us to  this Washington (Com)Post article actually worth reading. Access is free. (What fool pays for access to such a crappy publication?)

One more thing. When lawyers are replaced by AI systems will AI systems be suing AI systems over intellectual property theft? 

AI and the Unity of Consciousness

Top AI researchers such as Geoffrey Hinton, the "Godfather of AI,"  hold that advanced AI systems are conscious.  That is far from obvious, and may even be demonstrably false if we consider the phenomenon of the unity of consciousness.  I will first explain the phenomenon in question, and then conclude that AI systems cannot accommodate it.

Diachronic Unity of Consciousness, Example One

Suppose my mental state passes from one that is pleasurable to one that is painful.  I am observing a beautiful Arizona sunset when my reverie is suddenly broken by the piercing noise of a smoke detector.  Not only is the painful state painful, the transition from the pleasurable state to the painful one is itself painful.  The fact that the transition is painful shows that it is directly perceived. It is not as if there is merely a succession of consciousnesses (conscious states), one pleasurable, the other painful; there is in addition a consciousness of their succession.  For there is a consciousness of the transition from the pleasant state to the painful state, a consciousness that embraces both of the states, and so cannot be reductively analyzed into them.  But a consciousness of their succession is a consciousness of their succession in one subject, in one unity of consciousness.  It is a consciousness of the numerical identity of the self through the transition from the pleasurable state to the painful one.  In passing from a pleasurable state to a painful one, there is not only an awareness of a pleasant state followed by an awareness of a painful one, but also an awareness that the one who was in a pleasurable state is strictly and numerically the same as the one who is now in a painful state.  This sameness is phenomenologically given, although our access to this phenomenon is easily blocked by inappropriate models taken from the physical world.  Without the consciousness of sameness, there would be no consciousness of transition.

What this phenomenological argument shows is that the self cannot be a mere diachronic bundle or collection of states.  The self is a transtemporal unity distinct from its states whether these states are taken distributively (one by one) or collectively (all together).

May we conclude from the phenomenology of the situation that there is a simple, immaterial, meta-physical substance that each one of us is and that is the ontological support of the phenomenologically given unity of consciousness?  May we make the old-time school-metaphysical moves from the simplicity of this soul substance to its immortality? Maybe not! This is a further step that needs to be carefully considered. I don't rule it out, but I also don't rule it in. I don't need to take the further step for my present purpose, which is merely to show that a computing machine, no matter how complex or how fast its processing, cannot be conscious.  For the moment I content myself with the negative claim: no material system can be conscious. It follows straightaway that no AI system can be conscious.

Diachronic Unity of Consciousness, Example Two

Another example is provided by the hearing of a melody.  To hear the melody Do-Re-Mi, it does not suffice that there be a hearing of Do, followed by a hearing of Re, followed by a hearing of Mi.  For those three acts of hearing could occur in that sequence in three distinct subjects, in which case they would not add up to the hearing of a melody.  (Tom, Dick, and Harry can divide up the task of loading a truck, but not the ‘task’ of hearing a melody, or that of understanding a sentence.)  But now suppose the acts of hearing occur in the same subject, but that this subject is not a unitary and self-same individual but just the bundle of these three acts, call them A1, A2, and A3.  When A1 ceases, A2 begins, and when A2 ceases, A3 begins: they do not overlap.  In which act is the hearing of the melody?  A3 is the only likely candidate, but surely it cannot be a hearing of the melody.

This is because the awareness of a melody involves the awareness of the (musical not temporal)  intervals between the notes, and to apprehend these intervals there must be a retention (to use Husserl’s term) in the present act A3 of the past acts A2 and A1.  Without this phenomenological presence of the past acts in the present act, there would be no awareness in the present of the melody.  This implies that the self cannot be a mere bundle of perceptions externally related to each other, but must be a peculiarly intimate unity of perceptions in which the present perception A3 includes the immediately past ones A2 and A1 as temporally past but also as phenomenologically present in the mode of retention.  The fact that we hear melodies thus shows that there must be a self-same and unitary self through the period of time between the onset of the melody and its completion.  This unitary self is neither identical to the sum or collection of A1, A2, and A3, nor is it identical to something wholly distinct from them.  Nor of course is it identical to any one of them or any two of them.  This unitary self is co-given whenever one hears a melody.  (This seems to imply that all consciousness is at least implicitly self-consciousness. This is a topic for a later post.)

Diachronic-Synchronic Unity of Consciousness

Now consider a more complicated example in which I hear two chords, one after the other, the first major, the second minor.   I hear the major chord C-E-G, and then I hear the minor chord C-E flat-G.  But I also hear the difference between them.   How is the awareness of the major-minor difference possible? One condition of this possibility is the diachronic unity of consciousness. But there is also a second condition. The hearing of the major chord as major cannot be analyzed without remainder into an act of hearing C, an act of hearing E, and an act of hearing G, even when all occur simultaneously.  For to hear the three notes as a major chord, I must apprehend the 1-3-5 musical interval that they instantiate.  But this is possible only because the whole of my present consciousness is more than the sum of its parts.  This whole is no doubt made up of the part-consciousnesses, but it is not exhausted by them.  For it is also a consciousness of the relatedness of the notes.  But this consciousness of relatedness is not something in addition to the other acts of consciousness: it includes them and embraces them without being reducible to them.  So here we have an example of the diachronic-synchronic unity of consciousness.

These considerations appear to put paid to the conceit that AI systems can be conscious.

Or have I gone too far? You've heard me say that in philosophy there are few if any rationally compelling, ineluctably decisive arguments for substantive theses.  Are the above arguments among the few? Further questions obtrude themselves, for example, "What do you mean by 'material system'?"  "Could a panpsychist uphold the consciousness of advanced AI systems?"

Vita brevis, philosophia longa.

Can an AI System Meditate?

Resolute meditators on occasion experience a deep inner quiet. It is a definite state of consciousness. You will know it if you experience it, but destroy it if you try to analyze it.  If you have the good fortune to be vouchsafed such a state of awareness you must humbly accept it and not reflect upon it nor ask questions about it, such as: How did I arrive at this blissful state of mind? How can I repeat this experience?  You must simply rest in the experience. Become as a little child and accept the gift with gratitude. One-pointedness is destroyed by analysis. 

Mental quiet is a state in which the "mind works" have temporarily shut down in the sense that discursive operations (conceptualizing, judging, reasoning) have ceased, and there is no inner processing of data or computation.  You have achieved a deep level of conscious unity prior to and deeper than anything pieced together from parts. You are not asleep or dead but more fully alive. You are approaching the source of thoughts, which is not and cannot be a thought.  Crude analogy: the source of a stream is not itself a stream.  Less crude, but still an analogy: the unity of a proposition is not itself a proposition, or the proposition of which it is the unity, or a sub-propositional constituent of the proposition.

Can a computing machine achieve the blissful state of inner quiet? You can 'pull the plug' on it in which case it would 'go dark.'  The machine is either on or off (if it is 'asleep' it is still on).   But when the meditator touches upon inner quiet, he has not gone dark, but entered a light transcendentally prior to the objects of ordinary (discursive) mind.

I would replace the lyric, "Turn off your mind, relax, and float downstream; it is not dying, it is not dying" with "Turn off your discursive mind and swim upstream; it is not dying; it is not dying." "That you may see the meaning of Within."

Can an AI system achieve mental quiet, the first step on the mystical ascent? Cognate questions: Could such a system realize the identity of Atman and Brahman or enjoy the ultimate felicity of the Beatific Vision?  Is ultimate enlightenment reachable by an increase in processing speed? You are aware, aren't you, that processing speed is increasing exponentially?

The answer to these questions, of course, is No.  When a computer stops computing it ceases to function as it must function to be what it is.  But when we halt our discursive operations, we touch upon our true selves.

Is A.I. Killing the World Wide Web?

From The Economist:

As AI changes how people browse, it is altering the economic bargain at the heart of the internet. Human traffic has long been monetised using online advertising; now that traffic is drying up. Content producers are urgently trying to find new ways to make AI companies pay them for information. If they cannot, the open web may evolve into something very different.

[. . .]

“The nature of the internet has completely changed,” says Prashanth Chandrasekar, chief executive of Stack Overflow, best known as an online forum for coders. “AI is basically choking off traffic to most content sites,” he says. With fewer visitors, Stack Overflow is seeing fewer questions posted on its message boards. Wikipedia, also powered by enthusiasts, warns that AI-generated summaries without attribution “block pathways for people to access…and contribute to” the site.

This won't affect me. My writing is a labor of love. I don't try to make money from it. I don't need to. I've made mine. You could call me a "made man." I may, however, monetize my Substack. It seems churlish to refuse the pledges that readers have kindly made.

Intelligence, Cognition, Hallucination, and AI: Notes on Susskind

Herewith, a first batch of notes on Richard Susskind, How to Think About AI: A Guide for the Perplexed (Oxford 2025). I thank the multi-talented Brian Bosse for steering me toward this excellent book. Being a terminological stickler, I thought I'd begin this series of posts with some linguistic and conceptual questions.  We need to define terms, make distinctions, and identify fallacies.  I use double quotation marks to quote, and single to mention, sneer, and indicate semantic extensions. Material within brackets is my interpolation. I begin with a fallacy that I myself have fallen victim to. 

The AI Fallacy: "the mistaken assumption that the only way to get machines to perform at the level of the best humans is somehow to replicate the way humans work." (54) "The error is failing to recognize that AI systems do not [or need not] mimic or replicate human reasoning."  The preceding sentence is true, but only if the bracketed material is added.

Intellectual honesty demands that I tax myself with having committed the AI Fallacy. I wrote:

The verbal and non-verbal behavior of AI-driven robots is a mere simulation of the intelligent behavior of humans.  Artificial intelligence is simulated intelligence.

This is true of first-generation systems only.  These systems "required human 'knowledge engineers' to mine the jewels from the heads of 'domain experts' and convert their knowledge into decision trees" . . . whereas "second-generation AI systems" mine jewels "from vast oceans of data" and "directly detect patterns, trends, and relationships in these oceans of data." (17-18, italics added)  These Gen-2 systems 'learn' from all this data "without needing to be explicitly programmed." (18)  This is called 'machine learning' because the machine itself is 'learning.' Note the 'raised eyebrows' which raise the question: Are these systems really learning?
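
To make the contrast concrete, here is a toy sketch of my own (hypothetical rules and data; scikit-learn assumed for the second half). In the Gen-1 style a human expert's knowledge is hand-coded as explicit rules; in the Gen-2 style a comparable decision structure is induced from example data without being explicitly programmed.

```python
# Gen-1 style: knowledge elicited from a 'domain expert' and hand-coded as rules.
def expert_rules(fever: int, cough: int) -> str:
    if fever and cough:
        return "flu-like"
    if cough:
        return "cold"
    return "no finding"

# Gen-2 style: a model 'learns' the mapping from example data instead.
from sklearn.tree import DecisionTreeClassifier

X = [[1, 1], [1, 0], [0, 1], [0, 0]]                  # (fever, cough) cases
y = ["flu-like", "fever only", "cold", "no finding"]  # labels supplied by the data
model = DecisionTreeClassifier().fit(X, y)            # patterns detected, not programmed

print(expert_rules(1, 1), model.predict([[1, 1]])[0])  # same verdict, different provenance
```

The raised eyebrows remain apt: whether inducing such a structure from data deserves to be called 'learning' is precisely the question.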

So what I quoted myself as saying was right when I was a student of engineering in the late '60s, early '70s, but it is outdated now. There were actually two things we didn't appreciate back then. One was the impact of the exponential, not linear, increase in the processing power of computers. If you are not familiar with the difference between linear and exponential functions, here is a brief intro.  IBM's Deep Blue in 1997 bested Garry Kasparov, the quondam world chess champion. Grandmaster Kasparov was beaten by exponentially fast brute force processing; no human chess player can evaluate 300 million possible moves in one second.
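
For the difference in miniature, here is a toy sketch of my own (illustrative numbers only, not Susskind's figures): linear growth adds a fixed amount per period, while exponential growth multiplies by a fixed factor per period.

```python
def linear(start: float, step: float, periods: int) -> float:
    # Adds the same increment each period.
    return start + step * periods

def exponential(start: float, factor: float, periods: int) -> float:
    # Multiplies by the same factor each period (e.g., a doubling).
    return start * factor ** periods

for n in (1, 5, 10, 20):
    print(n, linear(1, 1, n), exponential(1, 2, n))
# After 20 periods the linear curve reaches 21; the doubling curve reaches 1,048,576.
```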

The second factor is even more important for understanding today's AI systems. Back in the day it was thought that practical AI could be delivered by assembling "huge decision trees that captured the apparent lines of reasoning of human experts . . . ." (17) But that was Gen-1 thinking as I have already explained.

More needs to be said, but I want to move on to three other words tossed around in contemporary AI jargon.

Are AI Systems Intelligent?

Here is what I wrote in May:

The verbal and non-verbal behavior of AI-driven robots is a mere simulation of the intelligent behavior of humans.  Artificial intelligence is simulated intelligence. And just as artificial flowers (made of plastic say) are not flowers, artificially intelligent beings are not intelligent. 'Artificial' in 'artificial intelligence' is an alienans adjective.

Perhaps you have never heard of such an adjective. 

A very clear example of an alienans adjective is 'decoy' in 'decoy duck.' A decoy duck is not a duck even if it walks like a duck, talks like a duck, etc., as the often mindlessly quoted old saying goes.  Why not? Because it is a piece of wood painted and tricked out to look like a duck to a duck so as to lure real ducks into the range of the hunters' shotguns.  The real ducks are the ducks that occur in nature. The hunters want to chow down on duck meat, not wood. A decoy duck is not a kind of duck any more than artificial leather is a kind of leather. Leather comes in different kinds: cow hide, horse hide, etc., but artificial leather such as Naugahyde is not a kind of leather. Same goes for faux marble and false teeth and falsies. Faux (false) marble is not marble. Fool's gold is not gold but pyrite or iron sulfide. And while false teeth might be functionally equivalent to real or natural teeth, they are not real or true teeth. That is why they are called false teeth.

An artificial heart may be the functional equivalent of a healthy biologically human heart, inasmuch as it pumps blood just as well as a biologically human heart, but it is not a biologically human heart. It is artificial because artifactual, man-made, thus not natural.  I am presupposing that there is a deep difference between the natural and the artificial and that homo faber, man the maker, cannot obliterate that distinction by replacing everything natural with something artificial.

I now admit, thanks to Susskind, that the bit about simulation quoted above commits what he calls the AI Fallacy, i.e., "the mistaken assumption that the only way to get machines to perform at the level of the best humans is somehow to replicate the way that humans work." (54) I also admit that said fallacy is a fallacy. The question for me now is whether I should retract my assertion that AI systems, since they are artificially intelligent, are not really intelligent.  Or is it logically consistent to affirm both of the following?

a) It is a mistake to think that we can get the outcomes we want from AI systems only if we can get them to process information in the same way that we humans process information.

and

b) AI systems are not really intelligent.

I think the two propositions are logically consistent, i.e., that they can both be true, and I think that in fact both are true. But in affirming (b) I am contradicting the "Godfather of AI," Geoffrey Hinton.  Yikes! He maintains that AI systems are all of the following: intelligent, more intelligent than us, actually conscious, potentially self-conscious, have experiences, and are the subjects of gen-u-ine volitional states. They have now or will have the ability to set goals and pursue purposes, their own purposes, whether or not they are also our purposes. If so, we might become the tools of our tools! They might have it in for us!

Note that if AI systems are more intelligent than us, then they are intelligent in the same sense in which we are intelligent, but to a greater degree.  Now we are really, naturally, intelligent, or at least some of us are. Thus Hinton is committed to saying that artificial intelligence is identical to real intelligence, as we experience it in ourselves in the first-person way.  He thinks that advanced AI systems  understand, assess, evaluate, judge, just as we do — but they do it better!

Now I deny that AI systems are intelligent, and I deny that they ever will be.  So I stick to my assertion that 'artificial' in 'artificial intelligence' is an alienans adjective.  But to argue my case will require deep inquiry into the nature of intelligence.  That task is on this blogger's agenda.  I suspect that Susskind will agree with my case. (Cf. pp. 59-60)  

Cognitive Computing?

Our natural tendency is to anthropomorphize computing machines. This is at the root of the AI Fallacy, as Susskind points out. (58)  But here I want to make a distinction between anthropocentrism and anthropomorphic projection. At the root of the AI Fallacy — the mistake of "thinking that AI systems have to copy the way humans work to achieve high-level performance" (58) — is anthropocentrism. This is what I take Susskind to mean by "anthropomorphize." We view computing machines from our point of view and think that they have to mimic, imitate, simulate what goes on in us for these machines to deliver the high-level outcomes we want.

We engage in anthropomorphic projection when we project into the machines states of mind that we know about in the first-person way, states of mind qualitatively identical to the states of mind that we encounter in ourselves, states of mind that I claim AI systems cannot possess.  That might be what Hinton and the boys are doing. I think that Susskind might well agree with me about this. He says the following about the much bandied-about phrase 'cognitive computing':

It might have felt cutting-edge to use this term, but it was plainly wrong-headed: the systems under this heading had no more cognitive states than a grilled kipper. It was also misleading — hype, essentially — because 'cognitive computing' suggested capabilities that AI systems did not have. (59)

The first sentence in this quotation is bad English. What our man should have written is: "the systems under this heading no more had cognitive states than a grilled kipper." By the way, this grammatic howler illustrates how word order, and thus syntax, can affect semantics.  What Susskind wrote is false since it implies that the kipper had cognitive states. My corrected sentence is true.

Pedantry aside, the point is that computers don't know anything. They are never in cognitive states. So say I, and I think Susskind is inclined to agree. Of course, I will have to argue this out.

Do AI Systems Hallucinate?

More 'slop talk' from  the AI boys, as Susskind clearly appreciates:

The same goes for 'hallucinations', a term which is widely used to refer to the errors and fabrications to which generative AI systems are prone. At best, this is another metaphor, and at worst the word suggests cognitive states that are quite absent. Hallucinations are mistaken perceptions of sensory experiences. This really isn't what's going on when ChatGPT churns out gobbledygook. (59, italics added)

I agree, except for the sentence I set in italics. There is nothing wrong with the grammar of the sentence. But the formulation is philosophically lame. I would put it like this, "An hallucination is an object-directed experience, the object of which  does not exist." For example, the proverbial drunk who visually hallucinates a pink rat is living through an occurrent sensory mental state that is directed upon a nonexistent object.  He cannot be mistaken about his inner perception of his sensory state; what he is mistaken about is the existence in the external world of the intentional object of his sensory state.

There is also the question whether all hallucinations are sensory. I don't think so. Later. It's time for lunch.

Quibbles aside, Susskind's book is excellent, inexpensive, and required reading if you are serious about these weighty questions.

Grok on Brian Leiter

I put the question to the Grok bot: Who first called Brian Leiter 'Ladder Man' and why? Grok shot back: 

The earliest identified use of the nickname "Ladder Man" for Brian Leiter appears in a blog post by Bill Vallicella on Maverick Philosopher dated March 13, 2014. Vallicella used the term to criticize Leiter’s perceived obsession with academic rankings and status, particularly through his creation and management of the Philosophical Gourmet Report (PGR), a controversial ranking of philosophy PhD programs. The nickname plays on the German word "Leiter," which means both "ladder" and "leader" (as in Gauleiter), suggesting Leiter’s focus on climbing or promoting hierarchical structures in academia. Vallicella’s post describes Leiter as a “careerist” and links the nickname to his reputation for prioritizing professional status and rankings.

 
 
No earlier source is identified, so Vallicella likely coined the term. The context was a broader critique of Leiter’s influence in philosophy, including his academic gossip site, Leiter Reports, and his perceived aggressive behavior, which some labeled as "thuggish." The nickname reflects a satirical jab at Leiter’s role in shaping academic hierarchies.
 
Not bad! But truth be told, I did not coin 'Ladder Man.' The honor goes to John J. Ray. See his blog Brian Leiter (Ladderman). I will check back later to see if Grok does an update. My impression thus far is that Grok crawls through substacks more assiduously than it does through blogs.
 
Hegel wrote about objektiver Geist.  Your humble correspondent is but one subjective spirit making his contribution to the depositum philosophiae subsisting in the aether of cyberspace.