AI and the Unity of Consciousness

Top AI researchers such as Geoffrey Hinton, the "Godfather of AI," hold that advanced AI systems are conscious.  That is far from obvious, and may even be demonstrably false if we consider the phenomenon of the unity of consciousness.  I will first explain the phenomenon in question, and then argue that AI systems cannot accommodate it.

Diachronic Unity of Consciousness, Example One

Suppose my mental state passes from one that is pleasurable to one that is painful.  As I observe a beautiful Arizona sunset, my reverie is suddenly broken by the piercing noise of a smoke detector.  Not only is the painful state painful; the transition from the pleasurable state to the painful one is itself painful.  The fact that the transition is painful shows that it is directly perceived. It is not as if there is merely a succession of consciousnesses (conscious states), one pleasurable, the other painful; there is in addition a consciousness of their succession.  For there is a consciousness of the transition from the pleasant state to the painful state, a consciousness that embraces both of the states, and so cannot be reductively analyzed into them.  But a consciousness of their succession is a consciousness of their succession in one subject, in one unity of consciousness.  It is a consciousness of the numerical identity of the self through the transition from the pleasurable state to the painful one.  Passing from a pleasurable state to a painful one, there is not only an awareness of a pleasant state followed by an awareness of a painful one, but also an awareness that the one who was in the pleasurable state is strictly and numerically the same as the one who is now in the painful state.  This sameness is phenomenologically given, although our access to this phenomenon is easily blocked by inappropriate models taken from the physical world.  Without the consciousness of sameness, there would be no consciousness of transition.

What this phenomenological argument shows is that the self cannot be a mere diachronic bundle or collection of states.  The self is a transtemporal unity distinct from its states whether these states are taken distributively (one by one) or collectively (all together).

May we conclude from the phenomenology of the situation that there is a simple, immaterial, metaphysical substance that each one of us is, and that is the ontological support of the phenomenologically given unity of consciousness?  May we make the old-time school-metaphysical moves from the simplicity of this soul substance to its immortality? Maybe not! This is a further step that needs to be carefully considered. I don't rule it out, but I also don't rule it in. Nor do I need to take the further step for my present purpose, which is merely to show that a computing machine, no matter how complex or how fast its processing, cannot be conscious.  For the moment I content myself with the negative claim: no material system can be conscious. It follows straightaway that no AI system can be conscious.

Diachronic Unity of Consciousness, Example Two

Another example is provided by the hearing of a melody.  To hear the melody Do-Re-Mi, it does not suffice that there be a hearing of Do, followed by a hearing of Re, followed by a hearing of Mi.  For those three acts of hearing could occur in that sequence in three distinct subjects, in which case they would not add up to the hearing of a melody.  (Tom, Dick, and Harry can divide up the task of loading a truck, but not the ‘task’ of hearing a melody, or that of understanding a sentence.)  Now suppose the acts of hearing occur in the same subject, but that this subject is not a unitary and self-same individual, merely the bundle of these three acts, call them A1, A2, and A3.  When A1 ceases, A2 begins, and when A2 ceases, A3 begins: they do not overlap.  In which act is the hearing of the melody?  A3 is the only likely candidate, but surely it cannot be a hearing of the melody.

This is because the awareness of a melody involves the awareness of the (musical, not temporal) intervals between the notes, and to apprehend these intervals there must be a retention (to use Husserl’s term) in the present act A3 of the past acts A2 and A1.  Without this phenomenological presence of the past acts in the present act, there would be no awareness in the present of the melody.  This implies that the self cannot be a mere bundle of perceptions externally related to each other, but must be a peculiarly intimate unity of perceptions in which the present perception A3 includes the immediately past ones A2 and A1 as temporally past but also as phenomenologically present in the mode of retention.  The fact that we hear melodies thus shows that there must be a self-same and unitary self through the period of time between the onset of the melody and its completion.  This unitary self is neither identical to the sum or collection of A1, A2, and A3, nor is it identical to something wholly distinct from them.  Nor of course is it identical to any one of them or any two of them.  This unitary self is co-given whenever one hears a melody.  (This seems to imply that all consciousness is at least implicitly self-consciousness. This is a topic for a later post.)

Diachronic-Synchronic Unity of Consciousness

Now consider a more complicated example in which I hear two chords, one after the other, the first major, the second minor.  I hear the major chord C-E-G, and then I hear the minor chord C-E flat-G.  But I also hear the difference between them.  How is the awareness of the major-minor difference possible? One condition of this possibility is the diachronic unity of consciousness. But there is also a second condition. The hearing of the major chord as major cannot be analyzed without remainder into an act of hearing C, an act of hearing E, and an act of hearing G, even when all occur simultaneously.  For to hear the three notes as a major chord, I must apprehend the 1-3-5 musical interval that they instantiate.  But this is possible only because the whole of my present consciousness is more than the sum of its parts.  This whole is no doubt made up of the part-consciousnesses, but it is not exhausted by them.  For it is also a consciousness of the relatedness of the notes.  But this consciousness of relatedness is not something in addition to the other acts of consciousness: it includes them and embraces them without being reducible to them.  So here we have an example of the diachronic-synchronic unity of consciousness.
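For readers who want the musical relation made concrete, here is a toy illustration of my own (simple semitone arithmetic, no phenomenology in it). In equal temperament the two chords share root and fifth and differ by a single semitone in the third; yet to hear that difference as the major-minor difference is to grasp a relation among all three notes at once.

    # Toy illustration (mine, not from the text): C major vs. C minor
    # in semitones above the root. The chords share the root and the
    # fifth; the audible difference lies wholly in the third.
    NOTE_NAMES = {0: "C", 3: "E-flat", 4: "E", 7: "G"}

    c_major = [0, 4, 7]   # C, E, G: major third (4 semitones) plus fifth
    c_minor = [0, 3, 7]   # C, E-flat, G: minor third (3 semitones) plus fifth

    for quality, chord in (("major", c_major), ("minor", c_minor)):
        notes = ", ".join(NOTE_NAMES[n] for n in chord)
        print(f"C {quality}: {notes} (semitones {chord})")

The arithmetic is trivial; the philosophical point is that no single act of hearing a single note contains the relation.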

These considerations appear to put paid to the conceit that AI systems can be conscious.

Or have I gone too far? You've heard me say that in philosophy there are few if any rationally compelling, ineluctably decisive arguments for substantive theses.  Are the above arguments among the few? Further questions obtrude themselves, for example, "What do you mean by 'material system'?"  "Could a panpsychist uphold the consciousness of advanced AI systems?"

Vita brevis, philosophia longa.

Is A.I. Killing the World Wide Web?

From The Economist:

As AI changes how people browse, it is altering the economic bargain at the heart of the internet. Human traffic has long been monetised using online advertising; now that traffic is drying up. Content producers are urgently trying to find new ways to make AI companies pay them for information. If they cannot, the open web may evolve into something very different.

[. . .]

“The nature of the internet has completely changed,” says Prashanth Chandrasekar, chief executive of Stack Overflow, best known as an online forum for coders. “AI is basically choking off traffic to most content sites,” he says. With fewer visitors, Stack Overflow is seeing fewer questions posted on its message boards. Wikipedia, also powered by enthusiasts, warns that AI-generated summaries without attribution “block pathways for people to access…and contribute to” the site.

This won't affect me. My writing is a labor of love. I don't try to make money from it. I don't need to. I've made mine. You could call me a "made man." I may, however, monetize my Substack. It seems churlish to refuse the pledges that readers have kindly made.

Intelligence, Cognition, Hallucination, and AI: Notes on Susskind

Herewith, a first batch of notes on Richard Susskind, How to Think About AI: A Guide for the Perplexed (Oxford 2025). I thank the multi-talented Brian Bosse for steering me toward this excellent book. Being a terminological stickler, I thought I'd begin this series of posts with some linguistic and conceptual questions.  We need to define terms, make distinctions, and identify fallacies.  I use double quotation marks to quote, and single to mention, sneer, and indicate semantic extensions. Material within brackets is my interpolation. I begin with a fallacy that I myself have fallen victim to. 

The AI Fallacy: "the mistaken assumption that the only way to get machines to perform at the level of the best humans is somehow to replicate the way humans work." (54) "The error is failing to recognize that AI systems do not [or need not] mimic or replicate human reasoning."  The preceding sentence is true, but only if the bracketed material is added.

Intellectual honesty demands that I tax myself with having committed the AI Fallacy. I wrote:

The verbal and non-verbal behavior of AI-driven robots is a mere simulation of the intelligent behavior of humans.  Artificial intelligence is simulated intelligence.

This is true of first-generation systems only.  These systems "required human 'knowledge engineers' to mine the jewels from the heads of 'domain experts' and convert their knowledge into decision trees" . . . whereas "second-generation AI systems" mine jewels "from vast oceans of data" and "directly detect patterns, trends, and relationships in these oceans of data." (17-18, italics added)  These Gen-2 systems 'learn' from all this data "without needing to be explicitly programmed." (18)  This is called 'machine learning' because the machine itself is 'learning.' Note the 'raised eyebrows,' which raise the question: Are these systems really learning?
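To fix the Gen-1/Gen-2 contrast in the mind, here is a toy sketch of my own devising (not Susskind's, and vastly simpler than any real system): first a rule hand-coded from a 'domain expert,' then a rule induced from labeled data with no one writing the decision logic.

    # Toy sketch (my illustration, not Susskind's): Gen-1 vs. Gen-2.

    # Gen-1: a knowledge engineer interviews an expert and hard-codes the rule.
    def expert_rule(temperature):
        return "fever" if temperature >= 38.0 else "normal"

    # Gen-2: no one programs the rule; a threshold is 'learned' from data.
    data = [(36.5, "normal"), (37.0, "normal"), (37.2, "normal"),
            (38.2, "fever"), (38.9, "fever"), (39.5, "fever")]

    def learn_threshold(examples):
        # Try the midpoints between adjacent readings and keep the one
        # that misclassifies the fewest labeled examples.
        xs = sorted(x for x, _ in examples)
        candidates = [(a + b) / 2 for a, b in zip(xs, xs[1:])]
        def errors(t):
            return sum((x >= t) != (label == "fever") for x, label in examples)
        return min(candidates, key=errors)

    t = learn_threshold(data)

    def learned_rule(temperature):
        return "fever" if temperature >= t else "normal"

    print(f"learned threshold: {t:.2f}")          # falls between 37.2 and 38.2
    print(expert_rule(38.4), learned_rule(38.4))  # fever fever

In the first case a human writes the rule; in the second the rule falls out of the data. Whether that deserves to be called learning is exactly the question the raised eyebrows raise.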

So what I quoted myself as saying would have been right when I was a student of engineering in the late '60s and early '70s, but it is outdated now. There were actually two things we didn't appreciate back then. One was the impact of the exponential, not linear, increase in the processing power of computers. If you are not familiar with the difference between linear and exponential functions, here is a brief intro.  IBM's Deep Blue in 1997 bested Garry Kasparov, the quondam world chess champion. Grandmaster Kasparov was beaten by exponentially fast brute-force processing; no human chess player can evaluate 300 million positions in one second.
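If the linked intro is not to hand, here is the difference in miniature, in a sketch of my own: a linear process adds a fixed amount per step, while an exponential process multiplies by a fixed factor per step.

    # Minimal illustration (mine): linear vs. exponential growth.
    # Linear growth adds a fixed increment per step; exponential growth
    # multiplies by a fixed factor per step (here, doubling).

    def linear(start, increment, steps):
        return start + increment * steps

    def exponential(start, steps, factor=2):
        return start * factor ** steps

    print(f"{'steps':>5} {'linear':>10} {'exponential':>12}")
    for steps in (1, 5, 10, 20, 30):
        print(f"{steps:>5} {linear(1, 1, steps):>10} {exponential(1, steps):>12}")

After thirty steps the linear quantity stands at 31; the doubling quantity stands at 2 to the 30th power, over a billion. That is the curve Deep Blue rode.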

The second factor is even more important for understanding today's AI systems. Back in the day it was thought that practical AI could be delivered by assembling "huge decision trees that captured the apparent lines of reasoning of human experts . . . ." (17) But that was Gen-1 thinking, as I have already explained.

More needs to be said, but I want to move on to three other words tossed around in contemporary AI jargon.

Are AI Systems Intelligent?

Here is what I wrote in May:

The verbal and non-verbal behavior of AI-driven robots is a mere simulation of the intelligent behavior of humans.  Artificial intelligence is simulated intelligence. And just as artificial flowers (made of plastic, say) are not flowers, artificially intelligent beings are not intelligent. 'Artificial' in 'artificial intelligence' is an alienans adjective.

Perhaps you have never heard of such an adjective. 

A very clear example of an alienans adjective is 'decoy' in 'decoy duck.' A decoy duck is not a duck even if it walks like a duck, talks like a duck, etc., as the often mindlessly quoted old saying goes.  Why not? Because it is a piece of wood painted and tricked out to look like a duck to a duck so as to lure real ducks into the range of the hunters' shotguns.  The real ducks are the ducks that occur in nature. The hunters want to chow down on duck meat, not wood. A decoy duck is not a kind of duck any more than artificial leather is a kind of leather. Leather comes in different kinds: cow hide, horse hide, etc., but artificial leather such as Naugahyde is not a kind of leather. The same goes for faux marble, false teeth, and falsies.  Faux (false) marble is not marble. Fool's gold is not gold but pyrite, an iron sulfide. And while false teeth might be functionally equivalent to real or natural teeth, they are not real or true teeth. That is why they are called false teeth.

An artificial heart may be the functional equivalent of a healthy biologically human heart, inasmuch as it pumps blood just as well as a biologically human heart, but it is not a biologically human heart. It is artificial because artifactual, man-made, thus not natural.  I am presupposing that there is a deep difference between the natural and the artificial and that homo faber, man the maker, cannot obliterate that distinction by replacing everything natural with something artificial.

I now admit, thanks to Susskind, that the bit about simulation quoted above commits what he calls the AI Fallacy, i.e., "the mistaken assumption that the only way to get machines to perform at the level of the best humans is somehow to replicate the way that humans work." (54) I also admit that said fallacy is a fallacy. The question for me now is whether I should retract my assertion that AI systems, since they are artificially intelligent, are not really intelligent.  Or is it logically consistent to affirm both of the following?

a) It is a mistake to think that we can get the outcomes we want from AI systems only if we can get them to process information in the same way that we humans process information.

and

b) AI systems are not really intelligent.

I think the two propositions are logically consistent, i.e., that they can both be true, and I think that in fact both are true. But in affirming (b) I am contradicting the "Godfather of AI," Geoffrey Hinton.  Yikes! He maintains that AI systems are intelligent, more intelligent than us, actually conscious, and potentially self-conscious; that they have experiences; and that they are the subjects of gen-u-ine volitional states. They have now, or will have, the ability to set goals and pursue purposes, their own purposes, whether or not they are also our purposes. If so, we might become the tools of our tools! They might have it in for us!

Note that if AI systems are more intelligent than us, then they are intelligent in the same sense in which we are intelligent, but to a greater degree.  Now we are really, naturally, intelligent, or at least some of us are. Thus Hinton is committed to saying that artificial intelligence is identical to real intelligence, as we experience it in ourselves in the first-person way.  He thinks that advanced AI systems understand, assess, evaluate, judge, just as we do — but they do it better!

Now I deny that AI systems are intelligent, and I deny that they ever will be.  So I stick to my assertion that 'artificial' in 'artificial intelligence' is an alienans adjective.  But to argue my case will require deep inquiry into the nature of intelligence.  That task is on this blogger's agenda.  I suspect that Susskind will agree with my case. (Cf. pp. 59-60)  

Cognitive Computing?

Our natural tendency is to anthropomorphize computing machines. This is at the root of the AI Fallacy, as Susskind points out. (58)  But here I want to make a distinction between anthropocentrism and anthropomorphic projection. At the root of the AI Fallacy — the mistake of "thinking that AI systems have to copy the way humans work to achieve high-level performance" (58) — is anthropocentrism. This is what I take Susskind to mean by "anthropomorphize." We view computing machines from our point of view and think that they have to mimic, imitate, simulate what goes on in us for these machines to deliver the high-level outcomes we want.

We engage in anthropomorphic projection when we attribute to the machines states of mind that we know about in the first-person way, states of mind qualitatively identical to those we encounter in ourselves, states of mind that I claim AI systems cannot possess.  That might be what Hinton and the boys are doing. I think that Susskind might well agree with me about this. He says the following about the much bandied-about phrase 'cognitive computing':

It might have felt cutting-edge to use this term, but it was plainly wrong-headed: the systems under this heading had no more cognitive states than a grilled kipper. It was also misleading — hype, essentially — because 'cognitive computing' suggested capabilities that AI systems did not have. (59)

The first sentence in this quotation is bad English. What our man should have written is: "the systems under this heading no more had cognitive states than a grilled kipper." By the way, this grammatical howler illustrates how word order, and thus syntax, can affect semantics.  What Susskind wrote is false since it implies that the kipper had cognitive states. My corrected sentence is true.

Pedantry aside, the point is that computers don't know anything. They are never in cognitive states. So say I, and I think Susskind is inclined to agree. Of course, I will have to argue this out.

Do AI Systems Hallucinate?

More 'slop talk' from the AI boys, as Susskind clearly appreciates:

The same goes for 'hallucinations', a term which is widely used to refer to the errors and fabrications to which generative AI systems are prone. At best, this is another metaphor, and at worst the word suggests cognitive states that are quite absent. Hallucinations are mistaken perceptions of sensory experiences. This really isn't what's going on when ChatGPT churns out gobbledygook. (59, italics added)

I agree, except for the sentence I set in italics. There is nothing wrong with the grammar of the sentence. But the formulation is philosophically lame. I would put it like this: "An hallucination is an object-directed experience, the object of which does not exist." For example, the proverbial drunk who visually hallucinates a pink rat is living through an occurrent sensory mental state that is directed upon a nonexistent object.  He cannot be mistaken about his inner perception of his sensory state; what he is mistaken about is the existence in the external world of the intentional object of his sensory state.

There is also the question whether all hallucinations are sensory. I don't think so. Later. It's time for lunch.

Quibbles aside, Susskind's book is excellent, inexpensive, and required reading if you are serious about these weighty questions.

The Universe Groks Itself and the Aporetics of Artificial Intelligence

I will cite a couple of articles for you to ponder.  Malcolm Pollack sends us to one in which scientists find their need for meaning satisfied by their cosmological inquiries.  Subtitle: “The stars made our minds, and now our minds look back.”

The idea is that in the 14 billion years since the Big Bang, the universe has become aware of itself in us. The big bad dualisms of Mind and Matter, Subject and Object are biting the dust. We belong here in the material universe. We are its eyes. Our origin in star matter is higher origin enough to satisfy the needs of the spirit. 

Malcolm sounds an appropriately skeptical note: "Grist for the mill – scientists yearning for spiritual comfort and doing the best their religion allows: waking up on third base and thinking they've hit a triple." A brilliant quip.

Another friend of mine, nearing the end of the sublunary trail, beset by maladies physical and spiritual, tells me that we are in Hell here and now. He exaggerates, no doubt, but as far as evaluations of our predicament go, his is closer to the truth than a scientistic optimism blind to the horrors of this life.  What do you say when nature puts your eyes out, or when dementia does a Biden on your brain, or when nature has you by the balls in the torture chamber?

What must it be like to be a "refugee on the unarmed road of flight" after Russian missiles have destroyed your town and killed your family?

Does the cosmos come to self-awareness in us? If it does, then perhaps it ought to figure out a way to restore itself to the nonbeing whence it sprang.

The other article to ponder, Two Paths for A.I. (The New Yorker), offers pessimistic and optimistic predictions about advanced AI.

If the AI pessimists are right, then it is bad news for the nature-mystical science optimists featured in the first article: in a few years, our advanced technology, self-replicating and recursively self-improving, may well restore the cosmos to (epistemic) darkness, though not to non-being. 

I am operating with a double-barreled assumption: mind and meaning cannot emerge from the wetware of brains or from the hardware of computers.  You can no more get mind and meaning from matter than blood from a stone. Mind and Meaning have a Higher Origin. Can I prove it? No. Can you disprove it? No. But you can reasonably believe it, and I'd say you are better off believing it than not believing it.  The will comes into it. (That's becoming a signature phrase of mine.) Pragmatics comes into it. The will to believe.

And it doesn't matter how complexly organized the hunk of matter is.  Metabasis eis allo genos? No way, Matty.

Theme music: Third Stone from the Sun.

Chimes of Freedom.

Biometric Authentication

I use multifactor authentication for access to many of the sites I visit, but conservatives are cautious by nature. So I am not inclined to spring for biometric authentication, some of the hazards of which are discussed here.  The alacrity with which the young adopt the latest trends is evidence of their inherent excess of trust, their lack of critical caution, and, in many, their out-and-out Pollyannaism. "Many companies and organizations are implementing biometric authentication for enhanced security and convenience, with deployment rising to 79% from 27% in just a few years." (AI-generated claim)
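One structural hazard can be put in a toy sketch of my own (not from the linked discussion): a password is checked exactly against a stored hash and can be rotated if the hash leaks, whereas a biometric is checked approximately, by a similarity threshold, and cannot be rotated at all, since you cannot change your fingerprint.

    # Toy sketch (mine): why a biometric is not just a fancier password.
    import hashlib
    import secrets

    # A password check is exact, and the secret is replaceable on a breach.
    def check_password(password, salt, stored_hash):
        return hashlib.sha256(salt + password.encode()).hexdigest() == stored_hash

    # A biometric check is approximate: sensors never read the trait the
    # same way twice, so matching is a threshold on similarity. And the
    # 'secret' is irreplaceable: a leaked template is leaked for life.
    def check_biometric(sample, template, threshold=0.9):
        matches = sum(a == b for a, b in zip(sample, template))
        return matches / len(template) >= threshold

    salt = secrets.token_bytes(16)
    stored = hashlib.sha256(salt + b"correct horse").hexdigest()
    print(check_password("correct horse", salt, stored))  # True, and revocable

    template = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # enrolled fingerprint features
    sample   = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]  # today's noisy reading
    print(check_biometric(sample, template))               # True at the 90% threshold

The threshold is the rub: set it high and you lock out legitimate users; set it low and you let in impostors. There is no such trade-off with an exact secret.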

In tech we trust? What, me worry? What could possibly go wrong?

Convenience? Whose?

Future shock is upon us in this brave new world. I allude to the titles of two books you should have read by now.

Practice situational awareness across all sectors and in every situation.

Jews, Muslims, Science and Technology

Which group has contributed more to science and technology? Jews or Muslims?  And why?

Question prompted by this:

Today, Jewish and Israeli MIT students were physically prevented from attending class by a hostile group of pro-Hamas and anti-Israel MIT students that call themselves the CAA [Coalition Against Apartheid, apparently].

The Biden Maladministration is Placing Us in Grave Danger

No, you useful idiots, white supremacy is not the greatest threat we face: it is no threat at all since it doesn't exist. A real threat we face, and a very serious one, is posed by an EMP directed against our unprotected grid.  HT to JSO for the following two videos. 

How would a nuclear EMP affect the power grid?

How long would society last during a total grid collapse?

Addendum 4/12:

A reader refers us to Are Aircraft Carriers Unsinkable? and comments, 

The whole article is hair-raising, but this jumped out at me:

About the same time that tensions were rising over Nancy Pelosi’s visit to Taiwan, reposts of a 2020 article by Major General Ed Thomas, the Commander of the Air Force’s Recruiting Service, began to pop up in the media.  The headline?  “Eighty-six percent of Air Force pilots are white men. Here’s why this needs to change.” Too many white men? Is that what our generals worry about? Like many other military top-brass, Major General Thomas seems to think that diversity wins wars.  That’s why he put “improving diversity” on “the top of my to-do-list.”

What if, in the meritocracy the armed services are supposed to be, not enough nonwhites cut the mustard? Promote them anyway?
 
BV: This goes to the heart of the matter, namely the assault on merit in favor of 'diversity,' 'equity,' and 'inclusion,' which in practice amount to governmentally enforced proportional representation, equality of outcome, and exclusion of 'racists' and 'white supremacists.'  The destructive DEI agenda is predicated upon reality denial, in particular, the denial of the reality that we are not equal, either as individuals or as groups, in those empirically measurable respects that bear upon qualification for jobs and positions.  The DEI agenda is dangerous and destructive because it allows the physically feeble and disabled, the mentally incompetent, the morally defective, and the factually ignorant and untrained to occupy high positions in government and industry. But 'allow' is too weak a word in this context; 'promote' is more to the point.
 
One reason this is dangerous is that our geopolitical adversaries do not subscribe to the destructive DEI ideology. While we self-enstupidate, they salivate. 
 
How explain the popularity of DEI among the useful idiots?  I suggest that it is due, at least in part, to the 'feel-good' nature of the DEI 'reforms.' They are found very appealing in this, the Age of Feeling.
 
How explain the popularity of DEI among the drivers of the demented doctrine? In the case of Major General Ed Thomas and his ilk it is probably sheer careerism. They go along not just to get along but to advance themselves career-wise, and the nation and the world be damned. It strains credulity to think that they actually believe the rubbish.
 
There's a bad moon rising, and trouble's on the way. 

The Ultimate Replacement

I am not referring to the ethno-masochistic self-replacement of whites who have lost their 'mojo,' but the replacement of humanity by soulless robots. I speak not merely of the replacement of a uniquely clever species of land mammal. I speak of the erasure of spirit in the material world by the elimination of those spirits in animal bodies that we are. And to make the dark thought darker: there may be little or nothing we can do about it. Our technology has a life of its own and is re-creating us in its own image. What was created by us to serve us will master and then replace us.

Filed under: Dark Thoughts

A Warning from Elon Musk

Here

There is no political solution, not only for the reasons that Musk gives, but also because it is not the best who rise in politics but often the very worst.  That is certainly true in the USA at present.  The current administration is characterized by blatant mendacity, corruption, sheer stupidity, and mental incompetence.