A Substack quickie.
Author: Bill Vallicella
The Philosopher
A philosopher is one sensitive to the strangeness of the ordinary, and open to the puzzles hidden in platitudes.
Is A.I. Killing the World Wide Web?
From The Economist:
As AI changes how people browse, it is altering the economic bargain at the heart of the internet. Human traffic has long been monetised using online advertising; now that traffic is drying up. Content producers are urgently trying to find new ways to make AI companies pay them for information. If they cannot, the open web may evolve into something very different.
[. . .]
“The nature of the internet has completely changed,” says Prashanth Chandrasekar, chief executive of Stack Overflow, best known as an online forum for coders. “AI is basically choking off traffic to most content sites,” he says. With fewer visitors, Stack Overflow is seeing fewer questions posted on its message boards. Wikipedia, also powered by enthusiasts, warns that AI-generated summaries without attribution “block pathways for people to access…and contribute to” the site.
This won't affect me. My writing is a labor of love. I don't try to make money from it. I don't need to. I've made mine. You could call me a "made man." I may, however, monetize my Substack. It seems churlish to refuse the pledges that readers have kindly made.
Against Verbal Inflation
There's too damned much 'reaching out' going on.
A Substack protest.
Intelligence, Cognition, Hallucination, and AI: Notes on Susskind
Herewith, a first batch of notes on Richard Susskind, How to Think About AI: A Guide for the Perplexed (Oxford 2025). I thank the multi-talented Brian Bosse for steering me toward this excellent book. Being a terminological stickler, I thought I'd begin this series of posts with some linguistic and conceptual questions. We need to define terms, make distinctions, and identify fallacies. I use double quotation marks to quote, and single to mention, sneer, and indicate semantic extensions. Material within brackets is my interpolation. I begin with a fallacy that I myself have fallen victim to.
The AI Fallacy: "the mistaken assumption that the only way to get machines to perform at the level of the best humans is somehow to replicate the way humans work." (54) "The error is failing to recognize that AI systems do not [or need not] mimic or replicate human reasoning." The preceding sentence is true, but only if the bracketed material is added.
Intellectual honesty demands that I tax myself with having committed the AI Fallacy. I wrote:
The verbal and non-verbal behavior of AI-driven robots is a mere simulation of the intelligent behavior of humans. Artificial intelligence is simulated intelligence.
This is true of first-generation systems only. These systems "required human 'knowledge engineers' to mine the jewels from the heads of 'domain experts' and convert their knowledge into decision trees" . . . whereas "second-generation AI systems" mine jewels "from vast oceans of data" and "directly detect patterns, trends, and relationships in these oceans of data." (17-18, italics added) These Gen-2 systems 'learn' from all this data "without needing to be explicitly programmed." (18) This is called 'machine learning' because the machine itself is 'learning.' Note the 'raised eyebrows' which raise the question: Are these systems really learning?
So what I quoted myself as saying was right when I was an engineering student in the late '60s and early '70s, but it is outdated now. There were actually two things we didn't appreciate back then. One was the impact of the exponential, not linear, increase in the processing power of computers. If you are not familiar with the difference between linear and exponential functions, here is a brief intro. IBM's Deep Blue in 1997 bested Garry Kasparov, the quondam world chess champion. Grandmaster Kasparov was beaten by exponentially fast brute force processing; no human chess player can evaluate 300 million possible moves in one second.
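The contrast between linear and exponential growth can be made vivid with a few lines of code. This is a minimal illustrative sketch of my own, not anything from Susskind; the function names are mine.

```python
# Contrast constant additive growth (linear) with repeated doubling (exponential).

def linear(n, step=1):
    """Value after n steps of constant additive growth, starting from 1."""
    return 1 + step * n

def exponential(n, factor=2):
    """Value after n steps of constant multiplicative growth (doubling by default)."""
    return factor ** n

for n in (1, 10, 20, 30):
    print(n, linear(n), exponential(n))
```

After thirty doublings the exponential value exceeds a billion while the linear one has reached only thirty-one, which is why hardware that doubles in power at regular intervals eventually overwhelms any fixed human capacity.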
The second factor is even more important for understanding today's AI systems. Back in the day it was thought that practical AI could be delivered by assembling "huge decision trees that captured the apparent lines of reasoning of human experts . . . ." (17) But that was Gen-1 thinking as I have already explained.
More needs to be said, but I want to move on to three other words tossed around in contemporary AI jargon.
Are AI Systems Intelligent?
Here is what I wrote in May:
The verbal and non-verbal behavior of AI-driven robots is a mere simulation of the intelligent behavior of humans. Artificial intelligence is simulated intelligence. And just as artificial flowers (made of plastic say) are not flowers, artificially intelligent beings are not intelligent. 'Artificial' in 'artificial intelligence' is an alienans adjective.
Perhaps you have never heard of such an adjective.
A very clear example of an alienans adjective is 'decoy' in 'decoy duck.' A decoy duck is not a duck even if it walks like a duck, talks like a duck, etc., as the often mindlessly quoted old saying goes. Why not? Because it is a piece of wood painted and tricked out to look like a duck to a duck so as to lure real ducks into the range of the hunters' shotguns. The real ducks are the ducks that occur in nature. The hunters want to chow down on duck meat, not wood. A decoy duck is not a kind of duck any more than artificial leather is a kind of leather. Leather comes in different kinds: cow hide, horse hide, etc., but artificial leather such as Naugahyde is not a kind of leather. Same goes for faux marble and false teeth and falsies. Faux (false) marble is not marble. Fool's gold is not gold but pyrite or iron sulfide. And while false teeth might be functionally equivalent to real or natural teeth, they are not real or true teeth. That is why they are called false teeth.
An artificial heart may be the functional equivalent of a healthy biologically human heart, inasmuch as it pumps blood just as well as a biologically human heart, but it is not a biologically human heart. It is artificial because artifactual, man-made, thus not natural. I am presupposing that there is a deep difference between the natural and the artificial and that homo faber, man the maker, cannot obliterate that distinction by replacing everything natural with something artificial.
I now admit, thanks to Susskind, that the bit about simulation quoted above commits what he calls the AI Fallacy, i.e., "the mistaken assumption that the only way to get machines to perform at the level of the best humans is somehow to replicate the way that humans work." (54) I also admit that said fallacy is a fallacy. The question for me now is whether I should retract my assertion that AI systems, since they are artificially intelligent, are not really intelligent. Or is it logically consistent to affirm both of the following?
a) It is a mistake to think that we can get the outcomes we want from AI systems only if we can get them to process information in the same way that we humans process information.
and
b) AI systems are not really intelligent.
I think the two propositions are logically consistent, i.e., that they can both be true, and I think that in fact both are true. But in affirming (b) I am contradicting the "Godfather of AI," Geoffrey Hinton. Yikes! He maintains that AI systems are all of the following: intelligent, more intelligent than us, actually conscious, potentially self-conscious, have experiences, and are the subjects of gen-u-ine volitional states. They have now or will have the ability to set goals and pursue purposes, their own purposes, whether or not they are also our purposes. If so, we might become the tools of our tools! They might have it in for us!
Note that if AI systems are more intelligent than us, then they are intelligent in the same sense in which we are intelligent, but to a greater degree. Now we are really, naturally, intelligent, or at least some of us are. Thus Hinton is committed to saying that artificial intelligence is identical to real intelligence, as we experience it in ourselves in the first-person way. He thinks that advanced AI systems understand, assess, evaluate, judge, just as we do — but they do it better!
Now I deny that AI systems are intelligent, and I deny that they ever will be. So I stick to my assertion that 'artificial' in 'artificial intelligence' is an alienans adjective. But to argue my case will require deep inquiry into the nature of intelligence. That task is on this blogger's agenda. I suspect that Susskind will agree with my case. (Cf. pp. 59-60)
Cognitive Computing?
Our natural tendency is to anthropomorphize computing machines. This is at the root of the AI Fallacy, as Susskind points out. (58) But here I want to make a distinction between anthropocentrism and anthropomorphic projection. At the root of the AI Fallacy — the mistake of "thinking that AI systems have to copy the way humans work to achieve high-level performance" (58) — is anthropocentrism. This is what I take Susskind to mean by "anthropomorphize." We view computing machines from our point of view and think that they have to mimic, imitate, simulate what goes on in us for these machines to deliver the high-level outcomes we want.
We engage in anthropomorphic projection when we project into the machines states of mind that we know about in the first-person way, states of mind qualitatively identical to the states of mind that we encounter in ourselves, states of mind that I claim AI systems cannot possess. This might be what Hinton and the boys are doing. I think that Susskind might well agree with me about this. He says the following about the much bandied-about phrase 'cognitive computing':
It might have felt cutting-edge to use this term, but it was plainly wrong-headed: the systems under this heading had no more cognitive states than a grilled kipper. It was also misleading — hype, essentially — because 'cognitive computing' suggested capabilities that AI systems did not have. (59)
The first sentence in this quotation is bad English. What our man should have written is: "the systems under this heading no more had cognitive states than a grilled kipper." By the way, this grammatical howler illustrates how word order, and thus syntax, can affect semantics. What Susskind wrote is false since it implies that the kipper had cognitive states. My corrected sentence is true.
Pedantry aside, the point is that computers don't know anything. They are never in cognitive states. So say I, and I think Susskind is inclined to agree. Of course, I will have to argue this out.
Do AI Systems Hallucinate?
More 'slop talk' from the AI boys, as Susskind clearly appreciates:
The same goes for 'hallucinations', a term which is widely used to refer to the errors and fabrications to which generative AI systems are prone. At best, this is another metaphor, and at worst the word suggests cognitive states that are quite absent. Hallucinations are mistaken perceptions of sensory experiences. This really isn't what's going on when ChatGPT churns out gobbledygook. (59, italics added)
I agree, except for the sentence I set in italics. There is nothing wrong with the grammar of the sentence. But the formulation is philosophically lame. I would put it like this: "An hallucination is an object-directed experience, the object of which does not exist." For example, the proverbial drunk who visually hallucinates a pink rat is living through an occurrent sensory mental state that is directed upon a nonexistent object. He cannot be mistaken about his inner perception of his sensory state; what he is mistaken about is the existence in the external world of the intentional object of his sensory state.
There is also the question whether all hallucinations are sensory. I don't think so. Later. It's time for lunch.
Quibbles aside, Susskind's book is excellent, inexpensive, and required reading if you are serious about these weighty questions.
Another Transparently Worthless Argument that Justifies the Questioning of Motives
Abusus non tollit usum
The abuse of a thing is no argument against its proper use. For example, the occasional abuse of State power by its agents is entirely consistent with the State's moral legitimacy.
You say you don't like the roundup of illegal aliens and their incarceration in detention centers such as the one in the heart of the Florida Everglades prior to their deportation? I don't like the roundup either, and it is to be expected that abuses will occur when a small minority of ICE officials overstep their legitimate authority. But the rule of law must be upheld.
It is also perfectly plain that the roundup would not be necessary had the previous (mal)administration enforced the borders and upheld the rule of law. So the Dems should look in the mirror and own the mess that they have created.
Nancy Pelosi spoke truly when she said that no one is above the law, not even the President of the United States. What she said is true; too bad she didn't mean it.
Abusus non tollit usum.
Sun Tzu: 21 Principles of the Art of War
Under 15 minutes. May help in the understanding of DJT's modus operandi et dominandi.
Trump Admin to Cut Off HEAD START for Illegals
WASHINGTON (AP) — The Trump administration will restrict immigrants in the country illegally from enrolling in Head Start, a federally funded preschool program, the Department of Health and Human Services announced Thursday. The move is part of a broad effort to limit access to federal benefits for immigrants who lack legal status.
Translation: Illegal aliens will no longer be allowed access to taxpayer dollars to which they have no right.
Dems will scream in protest and start doing what they reliably do, namely, lie. They will claim that the Administration is eliminating the Head Start program, just as they claim it is eliminating Medicaid.
I would begin to have some respect for our political enemies if they stopped lying and simply stated their adamant opposition to the USA as she was founded to be, and owned up to the fact that their goal is the "fundamental transformation" (Barack Hussein Obama) of the USA so as to bring it in line with what they think a nation ought to be. But they will not come clean. That is why I label them 'stealth ideologues.'
Am I a Body or Do I Have a Body?
In his last book, Mortality, the late Christopher Hitchens writes, "I don't have a body, I am a body." (86) He goes on to observe that he has "consciously and regularly acted as if this was not true." It is a curious fact that mortalists are among the worst abusers of the fleshly vehicle. But that is not my theme.
Is a person just his body? The meditation is best conducted in the first person: Am I just my body? Am I identical to my body? Am I numerically one and the same with my body, where body includes brain? Am I such that, whatever is true of my body is true of me, and vice versa? Let's start with some 'Moorean facts,' some undeniable platitudes.
Colin McGinn on Paradoxical Paradoxes
The indented material is from Colin McGinn's blog. My responses are flush left and in blue.
Paradoxes exist.
True.
Paradoxes belong either to the world or to our thought about the world.
True, if 'or' expresses exclusive disjunction.
They cannot belong to the world, because reality cannot be intrinsically paradoxical.
True. And so one ought to conclude that paradoxes reside in our thought about the world.
They cannot belong to our thought about the world, because then we would be able to alter our thought to avoid them (they cannot be intrinsic features of thought).
But surely we can alter our thought to avoid the paradoxes that reside in our thought about the world but not in the world.
Therefore, paradoxes don’t exist.
Non sequitur.
Therefore, paradoxes both exist and don’t exist.
Non sequitur. Although paradoxes do not exist in the world, in reality, they do exist in our thinking about the world, thinking that can be altered so as to avoid paradoxicality.
This is the paradox of paradoxes.
There is no such paradox. It seems to me that McGinn is equivocating on 'paradox.' His first three assertions are all true if 'paradox' means logical contradiction. But for the fourth assertion to be true, McGinn cannot mean by 'paradox' logical contradiction.
The Paradox of the Smashed Vase will help me make my point.
Suppose you inadvertently knock over a priceless vase, smashing it to pieces. You say to the owner, "There's no real harm done; after all, it's all still there." And then you support this outrageous claim by arguing:
1) There is nothing to the vase over and above the ceramic material that constitutes it.
2) When the vase is smashed, all the ceramic material that constitutes it remains in existence.
Therefore
3) The vase remains in existence after it is smashed.
"I don't owe you a penny!" (Adapted from Nicholas Rescher, Aporetics, U. of Pittsburgh Press, 2009, p. 91.)
This paradox arises from faulty thinking easily corrected. The mistake is to think that an artifact such as a vase is strictly and numerically identical to the matter that composes it. Not so: the arrangement or form of the matter must also be taken into consideration. This response is structurally the same as the much more detailed response I make to Peter van Inwagen's denial of the existence of artifacts.
Trump has Made News Great Again
Politics in hyperdrive. Who can keep up? And to what extent should one keep up? Here are a couple of articles that caught my eye:
The Islamic Republic's New Lease on Life. Mercifully brief, and very interesting. In Foreign Affairs, by one Mohammad Ayatollahi Tabaar. I'd be interested in Caiati's and Soriano's comments.
Elon Musk is America's Dumbest Smart Person. Roger Kimball is right, and he is a very good writer to boot, unlike so many journo-punks now churning out bad prose. How do I know Kimball is a good writer? It takes one to know one.
It's a funny world. My opinion of the 'pre-historic' Fetterman has gone up during the same period that my opinion of the engineering genius Musk has gone down.
I would put it like this. Donald Trump has injected the 'art of the deal' into politics. He has brought the transactional skills of a consummate businessman to bear with impressive results. He was politically naive but the seemingly providential interregnum provided him with a 'sabbatical' during which to 'bone up' with the help of brilliant advisors. That, and the stark contrast with the mentally inept, morally corrupt 'Traitor Joe' Biden have brought the Orange Man to power. Maybe God had a hand in it, or we just got lucky. I prefer not to bluster about the unknowable.
Musk, on the other hand, remains politically naive. You can't engineer politics.
As for how much time should be spent following the events of the day, see my Is it Rational to be Politically Ignorant?
Musk's third party doesn't have a chance, and in any case, Third Parties are nothing but discussion societies in political drag, as I argue over at the Stack.
A Warning to ChatGPT Users
I am gearing up for a series of posts on A. I. The topic has grabbed me by my epistemic shorthairs.
But for now, read this.
On Lame Appeals for Civility
Trey Gowdy issued one on his show last night. The man needs to stiffen his spine and realize that our political opponents are enemies with whom we share insufficient common ground for productive debate. They don't need debating but defeating. He did guest a Dem pol who talked some sense and seemed decent, but the guy was an outlier who apparently hasn't yet grasped that his party is and has been for some time a hard-Left outfit.
Here at MavPhil my tone is 'edgier' than on Substack and on Facebook it is edgier still. A good writer can write in different tones and voices depending on his audience.
See my Leftists and Civility over at the Stack for a measured partial statement of my views on this topic.
Russell’s Paradox Explained
1. From a contradiction, anything follows. Ex contradictione quodlibet. Another way of putting it would be to say that every argument having contradictory premises is valid. 'Valid' is a technical term. An argument A is valid =df no argument of A's form has true premises and a false conclusion. Now if A has two premises and they contradict each other, then the Law of Non-Contradiction (LNC) assures us that one of the premises must be false. It follows that no argument of A's form can have true premises and a false conclusion.
As long as 'valid' is understood as a technical term having all and only the meaning it is defined as having, then there should not be any trouble understanding how every argument with contradictory premises is valid.
Now consider this derivation:
a. Al is fat and Al is not fat.
b. Al is fat. (From (a) by Simplification)
c. Al is fat or Bush is blind (From (b) by Addition)
d. Al is not fat. (From (a) by Commutation and Simplification)
e. Bush is blind. (From (c) and (d) by Disjunctive Syllogism)
This illustrates how any proposition follows from a contradiction.
2. Now if Russell's Paradox is a contradiction, then set theory harbors a contradiction. And if anything follows from a contradiction, this is a serious problem for the logicist program of reducing all of mathematics to set theory.
3. Unrestricted Comprehension is the intuitively attractive idea that for any condition, or open sentence, or propositional function, there is a corresponding set. Thus, corresponding to the condition 'x is a cat' there is the set {x: x is a cat}, in plain English, the set of all cats. Intuitively, it seems that no matter how strange or complex the condition, there ought to be a set of things that satisfy the condition. Thus, corresponding to 'x is either an apple or a sparkplug' there is the set of all x such that x is either an apple or a sparkplug. Unrestricted Comprehension appears to be self-evident.
4. Now consider 'x is not in my pocket.' That condition picks out the set S of all things not in my pocket. Thus my wife and the Eiffel Tower are members of S. But so is S! The set of all things not in my pocket is not in my pocket. Thus S is a member of S. S is a self-membered set. But other sets are non-self-membered. The set of philosophers, for example, is not a member of itself. No set is a philosopher.
5. Now consider R, the set of all non-self-membered sets. Is R self-membered or not? Clearly, R is self-membered if and only if R is not self-membered — which is a contradiction.
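The antinomy can be mimicked computationally. If we model a 'set' as its membership predicate, Russell's R is the predicate that holds of exactly those predicates that do not hold of themselves; asking whether R holds of itself then unwinds forever, since R(R) = not R(R) = not not R(R), and so on. This is a loose analogy of my own devising, not a claim about set theory proper; Python reports the contradiction as an infinite regress.

```python
# Model a 'set' as its membership predicate: f(x) is True iff x is 'in' f.
# Russell's R 'contains' exactly those predicates that do not hold of themselves.
R = lambda f: not f(f)

# Asking whether R is a member of itself never terminates:
# evaluating R(R) requires evaluating R(R) first.
try:
    R(R)
    paradox = False
except RecursionError:
    paradox = True  # the self-referential question has no stable answer

print(paradox)
```

The analogy is imperfect (Python's failure is operational, not logical), but it shows concretely why no consistent answer to "is R in R?" exists.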
This contradiction is known in the trade as Russell's Paradox. The name, I'd say, is a misnomer. It ought to be called Russell's Antinomy since a paradox need not issue in a contradiction.
6. It is easy to see that the antinomy cannot arise without the Unrestricted Comprehension axiom which implies that, corresponding to the condition 'x is the set of all non-self-membered sets' there corresponds the set R. So one solution to the antinomy is via rejection of Unrestricted Comprehension.
Why not reject LNC instead?
