21 thoughts on “The Primacy of Intentionality over the Linguistic Revisited”

  1. Hi Bill

    Isn’t this point in need of more support and/or elaboration?
    >C. There can be mind without language, but no language without mind.

    Today’s LLMs are getting close to exactly the state that the second clause of C denies. They are, or soon will be, computational entities that can be described as “languages without (their own!) minds”. Addis’ quote contains no argument to the contrary.

    1. LLMs have their linguistic intentionality only derivatively, by way of the (primary intentionality of the) human minds that code them.

      In other words, they are simply ventriloquist puppets.

  2. John, “ventriloquist puppets” would never add anything that their puppet master did not actually say (or push through his throat and vocal cords, if you prefer). LLMs synthesize and provide statistically derived, context-dependent linguistic outputs (that in many cases can be labelled “highly relevant”). I understand the rhetorical force of your comparison, but factually we have a case of language without a mind. Clearly the current LLMs have passed the classic behavioural Turing test.

  3. Dmitri,

    Addis: “. . . it is unintelligible to suppose the existence of beings who are using language in all of its representative functions and who are also lacking in conscious states. The very notion of language as a representational system presupposes the notion of mind, but not vice versa.”

    I agree with that. An LLM can manipulate linguistic tokens, but I see no sense in saying that it represents anything to itself by these manipulations. As John says, they are something like the puppets of a ventriloquist. A really good ventriloquist has the ability to create the illusion in the audience that the dummy is expressing verbally its own thoughts, when it is doing no such thing. The dummy has no thoughts. All the thoughts come from the mind of the ventriloquist.

    Or consider a printed encyclopedia. Does it ‘contain knowledge’? Does it ‘know anything’? No. Do the sentences in an encyclopedia express the encyclopedia’s thoughts in the way the sentences I am now writing express my thoughts? Of course not. It has no thoughts, so it can’t express any.

    The same goes for a digitized encyclopedia that was programmed to ‘learn’ — note the sneer quotes — from every available source of digitized ‘information’ — more sneer quotes.

    Why should passing a Turing test be a criterion of mind?

    As you may have heard, John Searle has died. I am tempted to review some of his work in this area.

  4. Hi Bill

    Re Searle’s death — there is a decent obituary in the Guardian. https://www.theguardian.com/world/2025/oct/05/john-searle-obituary
    His ex-friend Ned Block spoke ill of Searle on Leiter’s blog mere days after his death…

    Re LLMs/Wikipedia comparison: I don’t think it is a good analogy. Wikipedia articles are directly written by people. In the case of LLMs there is a probabilistic algorithm that parses and rearranges vast amounts of data in a way that may or, more likely, may not be similar to the way our brains process data. The processing algorithm and the LLMs’ software architecture were designed by many smart researchers, and in that sense the LLMs’ language is derivative. But ChatGPTs and Geminis actively use language and do it mostly adequately without having their own minds. LLMs use language, and their verbal outputs are driven by indeterminate algorithms. Nothing of the sort happens in Wikipedia.

    The Chinese Room argument is also, arguably, based on a different idea — lack of understanding of the processed symbols is assumed, not argued for. I think it remains to be seen empirically whether something like LLM-style processing is happening in our brains (and if not, it is not important in this context).

    When more capable LLMs are properly embedded in robustly functioning robotic bodies, I believe that our judgments about whether they are really thinking could change dramatically.

    1. “Re LLMs/Wikipedia comparison: I don’t think it is a good analogy. Wikipedia articles are directly written by people.”

      The entire internet is “directly written by people”; every scintilla of data on the interwebs mined by every AI scraper is the product of a human – and thus intentional – mind, as are the various algorithms directing the various AIs how to look for whatever it is they seek.

      The internet contains no more knowledge than the 1955 edition of the Britannica.

      If Hume was right, and you can’t get an “ought” from an “is”, you certainly can’t get meaning from meaninglessness.

    2. Dmitri writes, making three main points:

      >>(1) But ChatGPTs and Geminis actively use language and do it mostly adequately without having their own minds. LLMs use language, and their verbal outputs are driven by indeterminate algorithms. Nothing of the sort happens in Wikipedia.<<

      You are of course right that there are important differences between LLMs and Wikipedia, but they are not relevant to the points about representation and understanding that Addis, Searle, and I are making.

      >>(2) The Chinese Room argument is also, arguably, based on a different idea — lack of understanding of the processed symbols is assumed, not argued for. I think it remains to be seen empirically whether something like LLM-style processing is happening in our brains (and if not, it is not important in this context).

      (3) When more capable LLMs are properly embedded in robustly functioning robotic bodies, I believe that our judgments about whether they are really thinking could change dramatically.<<

      Ad (1): What do you mean by “use language”? Suppose you write a program that does the following. You input an English word, and it puts out every English word that can be made from the letters in the input word. Input ‘stop’ and the output would be: stop, top, sop, pot, so, to. Would this be a case of ‘using language’ in your sense?

      Ad (2): We should discuss this separately.

      Ad (3): How do you distinguish between succumbing to the illusion that the AI-driven robot is really thinking and the perception that it IS really thinking?
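      For concreteness, here is a minimal sketch in Python of the sort of letter-rearranging program just described. It is purely illustrative: the short hard-coded word list is a stand-in assumption, where a real version would load a full dictionary.

        from collections import Counter

        # Stand-in word list; a real program would load a full dictionary.
        WORDS = ["stop", "top", "sop", "pot", "so", "to"]

        def words_from(source):
            # A word qualifies if it uses no letter more often than `source` supplies it.
            available = Counter(source.lower())
            return [w for w in WORDS if not (Counter(w) - available)]

        print(words_from("stop"))  # ['stop', 'top', 'sop', 'pot', 'so', 'to']

      Such a program shuffles strings according to a rule; whether that counts as ‘using language’ is exactly the question being put to Dmitri.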

  5. John, I am not denying the obvious. The internet is just a network. Wikipedia is just a cluster of articles written by people. Neither has a mind or knowledge. What I am saying is that LLMs literally illustrate that there can be language without mind (of the language speaker) — something that Addis explicitly claims is an impossibility.

    It is a separate question whether LLMs know anything, like some of us, or don’t know anything, like Wikipedia. I happen to think that this is an open question that will be decided empirically, in a collectively conventional mode, that is, with detractors and all that.

    When LLMs become embodied in actual robotic bodies, with coherent speech and what we today call reasoning abilities, it will be much more difficult to deny that they appear to be thinking and to know something.

    An acting, moving and reasoning LLM is not your brain-in-the-vat thought experiment. It is an active entity, an agent of sorts, at least partially autonomous. An intelligent zombie? Maybe. But the decision as to how we ought to classify this phenomenon would be a conventional one driven by many “influencers” from different spheres of human activity. Whether humanity survives this stage is altogether a different question.

    1. Dmitri, I think you are missing the point. Addis: “The very notion of language AS A REPRESENTATIONAL SYSTEM presupposes the notion of mind . . .” My emphasis. The phrase I put in CAPS could be understood in two ways, actually or potentially. Consider the sentence ‘Hitler came to power in 1933’ buried deep in some closed book on a library shelf. Does that very sentence (token, not type) actually represent anything? No. But if a person were to open the book and read the sentence, then it would represent a past state of affairs. The sentence’s actual representation presupposes mind. That’s the point. The representational power derives from the intrinsically or originally intentional power of the mind, such that the representative power of the linguistic token is either actual but derivative, or merely potential.

      1. I think I understand your and Addis’ point, Bill. To extend your example with a relevant consideration of mine that is not captured in it: the sentence, any sentence, does not mean anything in itself – agree. It means something to the person looking at it in your example – agree (assuming he understands English, is in his right mind, etc.).

        But let’s not assume that it means something ONLY to the person with a human mind. If you are saying something else and do not limit the possibility of intentionality to minds as we use this term today, then it is a different story. But if you do, I personally don’t see a reason to rule out other intelligent forms of existence (agents, aliens, robots, and whatever else they may be called). I do think that more advanced LLMs might actually be good enough to be pragmatically recognized as intelligent beings who understand the language without having actual minds (as we use this notion today). And I do believe that a pragmatic, reasonable, convention-driven recognition is the most appropriate form of recognition in this case. People will judge for themselves. Influencers will push their opinions. Hopefully the pragmatic judgment will be to the advantage of the good side.

        1. I don’t exclude the possibility of there being minds or persons that are not biologically human, extraterrestrials, for example. I am also open to the possibility of wholly unembodied minds: God, angels, demons. There may also be disembodied minds. It may be that after death we continue to exist as individual persons, but without bodies.

          So I am not saying that a sentence has meaning ONLY to a person with a human mind. ETs might exchange messages encoded in symbols unrecognizable by us. My point is the Searlean one that you can’t squeeze semantics out of syntactics. A syntactic string is meaningless apart from a mind.

          Now suppose you have an advanced AI-driven robot well-crafted to resemble a young French maid. She does what she’s told to do: serves you a drink, lights your cigar, pulls the Critique of Pure Reason from the shelf and hands it to you, puts the unused books back in their proper places. You quote a line in German, she translates it into French . . .

          By all external behavioral criteria, both verbal and bodily, it is AS IF ‘she’ is a person just like you are. Here’s one question: Does it follow that ‘she’ IS a person with all that personhood entails, including rights, duties, sentient states, intentional states? Does she deserve respect? Should ‘she’ be considered an end in herself in Kant’s sense?

          Is the bot’s satisfaction of the behavioral criteria sufficient to show that she really is a person?

          OR have you succumbed to an illusion if you come to regard her as a full-fledged person, as opposed to a highly advanced tool for you to use as you please and dispose of as you please?

          And now your wife walks into the room and sees you flirting with the maid. She shoots you both. How would that differ, in the eyes of the positive law and in the eyes of the moral law, from your wife’s shooting both you and an inflatable doll you are ‘having sex’ with?

          1. I believe that pragmatic socio-behavioural criteria, taken as a whole, roughly as in a well-designed Turing test and much wider than that — not in a crude, Skinner-like manner — bear a lot of weight in deciding whether an entity, be it a doll or what not, is sentient or not. We actually use such criteria today when we decide whether a creature is sentient or not. After all, your cats and my dog did not tell us that they are sentient. We decided that, or just accepted it, following an implicit social norm.

            Now, we both like The Twilight Zone, and in particular the familiar shooting scenario it inspired. What the courts would decide in the case you describe, when we do have AI creatures behaving as elaborately and intelligently as the human kind, is an open question. I tend to believe that a pragmatic approach would prevail and “my wife” would get two life sentences (unless she has good lawyers, like O.J. Simpson had, but that’s a different conversation).

  6. Hello Bill, your discussion proceeds largely without reference to consciousness until we reach subthesis (C). There you quote Addis,

    “it is unintelligible to suppose the existence of beings who are using language in all of its representative functions and who are also lacking in conscious states.”

    What does Addis mean by the phrase ‘in all of its representative functions’? Is he leaving open the intelligibility of there existing beings lacking consciousness that use language in some of its representative functions?

    1. Hi David,

      Substitute ‘any’ for ‘all’ and you will catch his meaning. I take Addis to be saying that if X is using language in any of its representative, i.e., signifying functions, then X is actually or potentially conscious.

      Let X be my answering machine. You call me and get my machine. You hear a recorded message in my voice: “I am presently in Madrid and incommunicado.” A certain proposition is thereby conveyed to you via the answering machine but IT is not using language in any of its representative functions. I am using IT as part of a communicative exchange with any person who rings me up. It is I who am using language in one of its representative functions. I refer to Madrid using ‘Madrid.’ The machine refers to nothing.

      Simpler example. A sign reads “This way to the showers.” Does that sign, or any part of it, or any mark on it, refer to anything? No. Don’t forget we are talking about original intentionality here, not derived intentionality.

  7. The thread seems now to be vacillating between two entirely separate questions – ontological and epistemological.

    1) Is there a “meaning” to sentences not inscribed by rational intellection (i.e., and perhaps less tendentiously, by anything capable of making one thing, X, be a symbol for something else, Y), whether or not anyone does, or could, know?

    2) How would it be possible to know, either way?

    My (and I believe Bill’s) original points were in answer to #1:

    “Meaning” is intentional; intentionality is necessarily a function of rationality; therefore only rationality can suffuse marks (aural/visual, etc.) with meaning.

    The most recent exchanges concern #2: how would we KNOW whether or not (e.g.) a sufficiently advanced AI had passed the Singularity, and become rational?

    I continue to believe in my position on #1.

    As for #2: what we currently take to be “persons” (the only seat of rationality of which we know) have a specifically human biological pedigree (despite wildly disparate intellectual and behavioural patterns between those individuals).

    If we wanted to expand the boundaries of “personality” to include, for example, programmed (and programmable) CPUs ensconced within robot bodies, then I would just say that if it looks like a duck, and sounds like a duck…

    As an outro, I will simply note that all of the so-called “fringe cases” of human-like computational intelligences depicted in our fantasist media (e.g. Ash and Bishop in the Alien franchise, Data in Star Trek, C-3PO and R2 in Star Wars, und so weiter) were written by human persons precisely to be empirically indistinguishable from biological humans (in their intellectual abilities, anyway).

    @Dmitri writes:

    “the sentence, any sentence, does not mean anything in itself – agree. It means something to the person looking at it”

  8. Thank you, Bill. Addis says,

    The very notion of language as a representational system presupposes the notion of mind, but not vice versa.

    I can agree with that, but why should it presuppose consciousness too?

    In a comment under this piece you write,

    Examples like this cause trouble for those divide-and-conquerors who want to prise apart intentionality from consciousness with its qualia and work on the problems separately, the first problem being supposedly tractable while the second is called the Hard Problem (David Chalmers). Both are hard as hell and they cannot be separated. See Colin McGinn, Galen Strawson, et al.

    Could you say a bit more on this?
