{"id":13485,"date":"2025-10-16T15:46:56","date_gmt":"2025-10-16T22:46:56","guid":{"rendered":"https:\/\/maverickphilosopher.blog\/?p=13485"},"modified":"2025-10-17T14:25:54","modified_gmt":"2025-10-17T21:25:54","slug":"mind-without-consciousness","status":"publish","type":"post","link":"https:\/\/maverickphilosopher.blog\/index.php\/2025\/10\/16\/mind-without-consciousness\/","title":{"rendered":"Mind without Consciousness?"},"content":{"rendered":"<p><span style=\"font-size: 14pt;\">David Brightly in a recent comment writes,<\/span><\/p>\n<blockquote>\n<p style=\"text-align: justify;\"><span style=\"font-size: 10pt;\"><span style=\"font-size: 12pt;\">[Laird] Addis says<\/span>,<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 10pt;\">The very notion of language as a representational system presupposes the notion of mind, but not vice versa.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 12pt;\">I can agree with that, but why should it presuppose consciousness too?<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 12pt;\">In a comment under\u00a0<a href=\"https:\/\/maverickphilosopher.blog\/index.php\/2021\/08\/02\/could-scollay-square-be-a-nonexistent-object\/\" rel=\"ugc\">this piece<\/a>\u00a0you write,<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 10pt;\">Examples like this cause trouble for those divide-and-conquerers who want to prise\u00a0 intentionality apart from consciousness with its qualia, subjectivity, and what-it-is-like-ness,\u00a0 and work on the problems separately, the first problem being supposedly tractable while the second is called the (intractable) Hard Problem (David Chalmers). Both are hard as hell and they cannot be separated. 
See Colin McGinn, Galen Strawson, et al.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 12pt;\">Could you say a bit more on this?<\/span><\/p>\n<\/blockquote>\n<p style=\"text-align: justify;\"><span style=\"font-size: 14pt;\">I&#8217;ll try.\u00a0 You grant that representation presupposes mind, but wonder why it should also presuppose consciousness.\u00a0 Why can&#8217;t there be a representational system that lacks consciousness?\u00a0 Why can&#8217;t there be an insentient, and thus unconscious, machine that represents objects and states of affairs external to itself? Fair question!\u00a0<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 14pt;\">Here is an example to make the problem jump out at you. Suppose you have an advanced AI-driven robot, an artificial French maid, let us assume, which is never in any sentient state, that is, it <em>never feels anything<\/em>.\u00a0 You could say, but only analogically, that the robot is in various &#8216;sensory&#8217; states, states\u00a0 caused by the causal impacts of physical objects against its &#8216;sensory&#8217; transducers whether optical, auditory, tactile, kinaesthetic . . . but these &#8216;sensory&#8217; states\u00a0 would have no associated qualitative or phenomenological features.\u00a0 Remember Herbert Feigl? In Feiglian terms, there would be no &#8216;raw feels&#8217; in the bot should her owner &#8216;feel her up.&#8217;\u00a0 Surely you have heard of Thomas Nagel. In Nagelian terms, there would be nothing it is like for the bot to have her breasts fondled.\u00a0 If her owner fondles the breasts of his robotic French maid, she feels nothing even though she is programmed to respond appropriately to the causal impacts via her linguistic and other behavior.\u00a0 \u00a0&#8220;What are you doing, sir? I may be a bot but I am not a sex bot! Hands off!&#8221; If the owner had to operate upon her, he would not need to put her under an anaesthetic. 
And this for the simple reason that she is nothing but an insensate machine.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 14pt;\">I hope Brightly agrees with me that verbal and nonverbal behavior, whether by robots or by us, is not constitutive of\u00a0 genuine sentient states. I hope he rejects analytical (as opposed to methodological) behaviorism, according to which feeling pain, for example,\u00a0 is nothing more than exhibiting verbal or nonverbal pain-behavior.\u00a0 I hope he agrees with me that the bot I described is a zombie (as philosophers use this term) and that we are not zombies.\u00a0\u00a0<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 14pt;\">But even if he agrees with all that, there remains the question: Is the robot, although wholly insentient, the subject of mental states, where mental states are intentional (object-directed) states?\u00a0 If yes, then we can have mind without consciousness, intrinsic intentionality without subjectivity, content without consciousness.<\/span><\/p>\n<p><span style=\"font-size: 14pt;\">Here are some materials for an argument contra.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 14pt;\">P1 Representation is a species of intentionality. 
Representational states of a system (whether an organism, a machine, a spiritual substance, whatever) are intentional or object-directed states.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 14pt;\">P2 Such states involve <em>contents<\/em> that mediate between the subject of the state and the thing toward which the state is directed.\u00a0 Contents are the <em>cogitata<\/em> in the following schema: <strong>Ego-cogito-cogitatum qua cogitatum-res<\/strong>.\u00a0<\/span><span style=\"font-size: 14pt;\">Note that &#8216;directed toward&#8217; and &#8216;object-directed&#8217; are being used here in such a way as to allow the possibility that there is nothing in reality, no <em>res<\/em>, to which these states are directed.\u00a0 Directedness is an <em>intrinsic<\/em> feature of intentional states, not a relational one.\u00a0 This means that the directedness of an object-directed state is what it is whether or not there is anything in the external world to which the state is directed. See <a href=\"https:\/\/maverickphilosopher.blog\/index.php\/2021\/07\/17\/object-directedness-and-object-dependence\/\">Object-Directedness and Object-Dependence<\/a> for more on this.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 14pt;\">As for the contents, they present the thing to the subject of the state. We can think of contents as modes of presentation, as <em>Darstellungsweisen<\/em> in something close to Frege&#8217;s sense.\u00a0 \u00a0 \u00a0Necessarily, no state without a content, and no content without a state.\u00a0 (Compare the strict correlation of <em>noesis<\/em> and <em>noema<\/em> in Husserl.) 
Suppose I undergo an experience which is the seeing <em>as of<\/em>\u00a0 a tree.\u00a0 I am the subject of the representational state of seeing and the thing to which the state is directed, if it exists, is a tree in nature.\u00a0 The &#8216;<em>as of<\/em>&#8217; locution signals that the thing intended in the state may or may not exist in reality.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 14pt;\">P3 But the tree, even if it exists in the external world, is not given, i.e., does not appear to the subject, with all its aspects, properties, and relations, but only with some of them. John Searle speaks of the &#8220;aspectual shape&#8221; of intentional states. Whenever we perceive anything or think about anything, we always do so under some aspects <strong>and not others<\/strong>.\u00a0 These aspectual features are essential to the intentional state; they are part of what make intentional\u00a0 states the states that they are. (<em>The Rediscovery of the Mind<\/em>, MIT Press, 1992, pp. 156-157) The phrase I bolded implies that no intentional state that succeeds in targeting a thing (<em>res<\/em>) in the external world is such that every aspect of\u00a0 the thing is before the mind of the person in the state.<\/span><\/p>\n<p style=\"text-align: justify;\">P4 <span style=\"font-size: 14pt;\">Intentional states are therefore not only necessarily\u00a0<em>of<\/em>\u00a0something; they are necessarily of something\u00a0<em>as<\/em> something.\u00a0 And given the finitude of the human mind, I want to underscore the fact that\u00a0 even if every F is a G, one\u00a0 can be aware of <em>x<\/em> as F without being aware of\u00a0 <em>x<\/em> as G.\u00a0\u00a0 Indeed, this is so even if necessarily (whether metaphysically or nomologically) every F is a G. 
Thus I can be aware of a moving object as a cat, without being aware of it as spatially extended, as an animal, as a mammal, as an animal that cools itself by panting as opposed to sweating, as my cat, as the same cat I saw an hour ago, etc.\u00a0\u00a0<\/span><\/p>\n<p><span style=\"font-size: 14pt;\">BRIGHTLY&#8217;S THEORY (as I understand it, in my own words)<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 14pt;\">B1. There is a distinction between subpersonal and personal contents. Subpersonal contents exist without the benefit of consciousness and play their mediating role in representational states in wholly insentient machines such as the AI-driven robotic maid.\u00a0\u00a0<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 14pt;\">B2. We attribute subpersonal contents to machines of sufficient complexity and these attributions are correct in that these machines <em>really are<\/em> intentional\/representational systems.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 14pt;\">B3. While it is true that the only intentional (object-directed) states of which we humans are aware are conscious intentional states, that they are\u00a0 conscious is a merely contingent fact about them. Thus, &#8220;the conditions necessary and sufficient for content are <em>neutral<\/em> on the question whether the bearer of the content happens to be a conscious state. Indeed the very same range of contents that are possessed by conscious creatures could be possessed by creatures without a trace of consciousness.&#8221; (Colin McGinn, <em>The Problem of Consciousness<\/em>, Blackwell 1991, p. 32.)<\/span><\/p>\n<p><span style=\"font-size: 14pt;\">MY THEORY<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 14pt;\">V1. There is no distinction between subpersonal and personal contents. All contents are contents of (belonging to) conscious states. 
Brentano taught that all consciousness is intentional, that every consciousness is a consciousness of something.\u00a0 I deny that, holding as I do that <a href=\"https:\/\/maverickphilosopher.blog\/index.php\/2020\/12\/12\/f-h-bradley-on-the-non-intentionality-of-pleasure-and-pain\/\">some conscious states are non-intentional.<\/a> But I do subscribe to the <em>Converse Brentano Thesis<\/em>, namely, that all intentionality is conscious. In a slogan adapted from McGinn though not quite endorsed by him, <em>There is no of-ness without what-it-is-like-ness<\/em>. This implies that only conscious beings can be the subjects of original or intrinsic intentionality.\u00a0 And so the\u00a0 robotic maid is not the subject of intentional\/representational states. The same goes for the cerebral processes transpiring\u00a0 in us humans when said processes are viewed as purely material: they are not about anything because there is nothing it is like to be them.\u00a0 Whether one is a meat head or a silicon head, no content without consciousness! Let that be our battle cry.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 14pt;\">And so, when the robotic maid&#8217;s voice synthesizer &#8216;says&#8217; &#8216;This shelf is so dusty!&#8217; it is only AS IF &#8216;she&#8217; is thereby referring to a state of affairs and its constituents, the shelf and the dust.\u00a0 &#8216;She&#8217; is not saying anything, <em>sensu stricto<\/em>, but merely making sounds to which <em>we<\/em>, original-<em>Sinn<\/em>-ers, attribute meaning and reference. Thinking reference (intentionality) enjoys primacy over linguistic reference. Cogitation trumps word-slinging. The latter is parasitic upon the former.\u00a0 Language without mind is just scribbles, pixels, chalk marks, indentations in stone, ones and zeros. As Mr. 
Natural might have said, &#8220;It don&#8217;t mean shit.&#8221; <em>An sich, und sensu stricto.<\/em><\/span><\/p>\n<p><img decoding=\"async\" src=\"https:\/\/maverickphilosopher.blog\/wp-content\/uploads\/2025\/10\/Mr-Natural.jpg\" \/><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 14pt;\">V2. Our attribution of intentionality to insentient systems is merely AS IF.\u00a0 The robot in my example behaves <em>as if<\/em> it is really cognizant of states of affairs such as the dustiness of the book shelves and <em>as if<\/em> it really wants to please its boss while really fearing his sexual advances.\u00a0 But all the real intentionality is in us who make the attributions.\u00a0 And please note that our attributing of intentionality to systems, whether silicon-based or meat-based, that cannot host it is itself real intentionality. It follows, <em>pace<\/em> Daniel Dennett, that intentionality cannot be ascriptive all the way down (or up). But Dennett&#8217;s ascriptivist theory of intentionality calls for a separate post.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 14pt;\">V3. It is not merely a contingent fact about the intentional states of which we are introspectively aware that they are conscious states; it is essential to them.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 14pt;\">NOW, have I refuted Brightly? No! I have arranged a <a href=\"https:\/\/williamfvallicella.substack.com\/p\/the-concept-of-standoff-in-philosophy-c1b?utm_source=publication-search\">standoff<\/a>.\u00a0 I have not refuted but merely neutralized his position by showing that it is not rationally coercive.\u00a0 I have done this by sketching a rationally acceptable alternative. We have made progress in that we now both better understand the problems we are discussing and our different approaches to them.<\/span><\/p>\n<p style=\"text-align: justify;\"><span style=\"font-size: 14pt;\">Can we break the standoff? 
I doubt it, but we shall see.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>David Brightly in a recent comment writes, [Laird] Addis says, The very notion of language as a representational system presupposes the notion of mind, but not vice versa. I can agree with that, but why should it presuppose consciousness too? In a comment under\u00a0this piece\u00a0you write, Examples like this cause trouble for those divide-and-conquerers who &hellip; <a href=\"https:\/\/maverickphilosopher.blog\/index.php\/2025\/10\/16\/mind-without-consciousness\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Mind without Consciousness?&#8221;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":13523,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[96,100,54,405],"tags":[],"class_list":["post-13485","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-consciousness-and-qualia","category-intentionality","category-mind","category-representation"],"_links":{"self":[{"href":"https:\/\/maverickphilosopher.blog\/index.php\/wp-json\/wp\/v2\/posts\/13485","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/maverickphilosopher.blog\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/maverickphilosopher.blog\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/maverickphilosopher.blog\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/maverickphilosopher.blog\/index.php\/wp-json\/wp\/v2\/comments?post=13485"}],"version-history":[{"count":12,"href":"https:\/\/maverickphilosopher.blog\/index.php\/wp-json\/wp\/v2\/posts\/13485\/revisions"}],"predecessor-version":[{"id":13538,"href":"https:\/\/maverickphilosopher.blog\/index.php\/wp-json\/wp\/v2\/posts\/13485\/revisions\/13538"}],"wp:featuredmedia":[{"embeddable"
:true,"href":"https:\/\/maverickphilosopher.blog\/index.php\/wp-json\/wp\/v2\/media\/13523"}],"wp:attachment":[{"href":"https:\/\/maverickphilosopher.blog\/index.php\/wp-json\/wp\/v2\/media?parent=13485"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/maverickphilosopher.blog\/index.php\/wp-json\/wp\/v2\/categories?post=13485"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/maverickphilosopher.blog\/index.php\/wp-json\/wp\/v2\/tags?post=13485"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}