Mind without Consciousness?

David Brightly in a recent comment writes,

[Laird] Addis says,

The very notion of language as a representational system presupposes the notion of mind, but not vice versa.

I can agree with that, but why should it presuppose consciousness too?

In a comment under this piece you write,

Examples like this cause trouble for those divide-and-conquerers who want to prise  intentionality apart from consciousness with its qualia, subjectivity, and what-it-is-like-ness,  and work on the problems separately, the first problem being supposedly tractable while the second is called the (intractable) Hard Problem (David Chalmers). Both are hard as hell and they cannot be separated. See Colin McGinn, Galen Strawson, et al.

Could you say a bit more on this?

I’ll try.  You grant that representation presupposes mind, but wonder why it should also presuppose consciousness.  Why can’t there be a representational system that lacks consciousness?  Why can’t there be an insentient, and thus unconscious, machine that represents objects and states of affairs external to itself? Fair question! 

Here is an example to make the problem jump out at you. Suppose you have an advanced AI-driven robot, an artificial French maid, let us assume, which is never in any sentient state, that is, it never feels anything.  You could say, but only analogically, that the robot is in various ‘sensory’ states, states  caused by the causal impacts of physical objects against its ‘sensory’ transducers whether optical, auditory, tactile, kinaesthetic . . . but these ‘sensory’ states  would have no associated qualitative or phenomenological features.  Remember Herbert Feigl? In Feiglian terms, there would be no ‘raw feels’ in the bot should her owner ‘feel her up.’  Surely you have heard of Thomas Nagel. In Nagelian terms, there would be nothing it is like for the bot to have her breasts fondled.  If her owner fondles the breasts of his robotic French maid, she feels nothing even though she is programmed to respond appropriately to the causal impacts via her linguistic and other behavior.   “What are you doing, sir? I may be a bot but I am not a sex bot! Hands off!” If the owner had to operate upon her, he would not need to put her under an anaesthetic. And this for the simple reason that she is nothing but an insensate machine.

I hope Brightly agrees with me that verbal and nonverbal behavior, whether by robots or by us, is not constitutive of genuine sentient states. I hope he rejects analytical (as opposed to methodological) behaviorism, according to which feeling pain, for example, is nothing more than exhibiting verbal or nonverbal pain-behavior.  I hope he agrees with me that the bot I described is a zombie (as philosophers use this term) and that we are not zombies.

But even if he agrees with all that, there remains the question: Is the robot, although wholly insentient, the subject of mental states, where mental states are intentional (object-directed) states?  If yes, then we can have mind without consciousness, intrinsic intentionality without subjectivity, content without consciousness.

Here are some materials for an argument contra.

P1 Representation is a species of intentionality. Representational states of a system (whether an organism, a machine, a spiritual substance, whatever) are intentional or object-directed states.

P2 Such states involve contents that mediate between the subject of the state and the thing toward which the state is directed.  Contents are the cogitata in the following schema: Ego-cogito-cogitatum qua cogitatum-res. Note that ‘directed toward’ and ‘object-directed’ are being used here in such a way as to allow the possibility that there is nothing in reality, no res, to which these states are directed.  Directedness is an intrinsic feature of intentional states, not a relational one.  This means that the directedness of an object-directed state is what it is whether or not there is anything in the external world to which the state is directed. See Object-Directedness and Object-Dependence for more on this.

As for the contents, they present the thing to the subject of the state. We can think of contents as modes of presentation, as Darstellungsweisen in something close to Frege’s sense.  Necessarily, no state without a content, and no content without a state.  (Compare the strict correlation of noesis and noema in Husserl.) Suppose I undergo an experience which is the seeing as of a tree.  I am the subject of the representational state of seeing and the thing to which the state is directed, if it exists, is a tree in nature.  The ‘as of’ locution signals that the thing intended in the state may or may not exist in reality.

P3 But the tree, even if it exists in the external world, is not given, i.e., does not appear to the subject, with all its aspects, properties, and relations, but only with some of them. John Searle speaks of the “aspectual shape” of intentional states. Whenever we perceive anything or think about anything, we always do so under some aspects and not others.  These aspectual features are essential to the intentional state; they are part of what make intentional  states the states that they are. (The Rediscovery of the Mind, MIT Press, 1992, pp. 156-157) The phrase I bolded implies that no intentional state that succeeds in targeting a thing (res) in the external world is such that every aspect of the thing is before the mind of the person in the state.

P4 Intentional states are therefore not only necessarily of something; they are necessarily of something as something.  And given the finitude of the human mind, I want to underscore the fact that  even if every F is a G, one  can be aware of x as F without being aware of  x as G.   Indeed, this is so even if necessarily (whether metaphysically or nomologically) every F is a G. Thus I can be aware of a moving object as a cat, without being aware of it as spatially extended, as an animal, as a mammal, as an animal that cools itself by panting as opposed to sweating, as my cat, as the same cat I saw an hour ago, etc.  

BRIGHTLY’S THEORY (as I understand it, in my own words)

B1. There is a distinction between subpersonal and personal contents. Subpersonal contents exist without the benefit of consciousness and play their mediating role in representational states in wholly insentient machines such as the AI-driven robotic maid.  

B2. We attribute subpersonal contents to machines of sufficient complexity and these attributions are correct in that these machines really are intentional/representational systems.

B3. While it is true that the only intentional (object-directed) states of which we humans are aware are conscious intentional states, that they are conscious is a merely contingent fact about them. Thus, “the conditions necessary and sufficient for content are neutral on the question whether the bearer of the content happens to be a conscious state. Indeed the very same range of contents that are possessed by conscious creatures could be possessed by creatures without a trace of consciousness.” (Colin McGinn, The Problem of Consciousness, Blackwell, 1991, p. 32.)

MY THEORY

V1. There is no distinction between subpersonal and personal contents. All contents are contents of (belonging to) conscious states. Brentano taught that all consciousness is intentional, that every consciousness is a consciousness of something.  I deny that, holding as I do that some conscious states are non-intentional. But I do subscribe to the Converse Brentano Thesis, namely, that all intentionality is conscious. In a slogan adapted from McGinn though not quite endorsed by him, There is no of-ness without what-it-is-like-ness. This implies that only conscious beings can be the subjects of original or intrinsic intentionality.  And so the  robotic maid is not the subject of intentional/representational states. The same goes for the cerebral processes transpiring  in us humans when said processes are viewed as purely material: they are not about anything because there is nothing it is like to be them.  Whether one is a meat head or a silicon head, no content without consciousness! Let that be our battle cry.

And so, when the robotic maid’s voice synthesizer ‘says’ ‘This shelf is so dusty!’ it is only AS IF ‘she’ is thereby referring to a state of affairs and its constituents, the shelf and the dust.  ‘She’ is not saying anything, sensu stricto, but merely making sounds to which we original-Sinn-ers attribute meaning and reference. Thinking reference (intentionality) enjoys primacy over linguistic reference. Cogitation trumps word-slinging. The latter is parasitic upon the former.  Language without mind is just scribbles, pixels, chalk marks, indentations in stone, ones and zeros. As Mr. Natural might have said, “It don’t mean shit.” An sich, und sensu stricto.

V2. Our attribution of intentionality to insentient systems is merely AS IF.  The robot in my example behaves as if it is really cognizant of states of affairs such as the dustiness of the book shelves and as if it really wants to please its boss while really fearing his sexual advances.  But all the real intentionality is in us who make the attributions.  And please note that our attributing of intentionality to systems, whether silicon-based or meat-based, that cannot host it is itself real intentionality. It follows, pace Daniel Dennett, that intentionality cannot be ascriptive all the way down (or up). But Dennett’s ascriptivist theory of intentionality calls for a separate post.

V3. It is not merely a contingent fact about the intentional states of which we are introspectively aware that they are conscious states; it is essential to them.

NOW, have I refuted Brightly? No! I have arranged a standoff.  I have not refuted but merely neutralized his position by showing that it is not rationally coercive.  I have done this by sketching a rationally acceptable alternative. We have made progress in that we now both better understand the problems we are discussing and our different approaches to them.

Can we break the standoff? I doubt it, but we shall see.

Why AI Systems Cannot be Conscious

1) To be able to maintain that AI systems are literally conscious in the way we are, conscious states must be multiply realizable. Consider a cognitive state such as knowing that 7 is a prime number. That state is realizable in the wetware of human brains. The question is whether the same type of state could be realized in the hardware of a computing machine. Keep in mind the type-token distinction. The realization of the state in question (knowing that 7 is prime) is its tokening in brain matter in the one instance, in silicon-based matter in the other. This is not possible without multiple realizability of one and the same type of mental state.

2) Conscious states (mental states) are multiply realizable only if functionalism is true. This is obvious, is it not?

3) Functionalism is incoherent.

Therefore:

4) AI systems cannot be literally conscious in the way we are.
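For readers who want the logical skeleton laid bare, here is a minimal sketch in the Lean proof assistant. The proposition names are merely my labels, and the modal force of "cannot" is flattened to a plain negation; nothing here argues for the premises, the point is only that (4) follows from (1) through (3).

    -- Minimal sketch: premises (1)-(3) as hypotheses, conclusion (4) as the goal.
    -- The proposition names are illustrative labels only.
    example
        (AIConscious MultiplyRealizable FunctionalismTrue : Prop)
        (p1 : AIConscious → MultiplyRealizable)        -- (1)
        (p2 : MultiplyRealizable → FunctionalismTrue)  -- (2)
        (p3 : ¬ FunctionalismTrue) :                   -- (3)
        ¬ AIConscious :=                               -- (4)
      fun h => p3 (p2 (p1 h))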

That's the argument.  The premise that needs defending is (3).  So let's get to it.

Suppose Socrates Jones is in some such state as that of perceiving a tree. The state is classifiable as mental as opposed to a physical state like that of his lying beneath a tree. What makes a mental state mental? That is the question.

The functionalist answer is that what makes a mental state mental is just the causal role it plays in mediating between the sensory inputs, behavioral outputs, and other internal states of the subject in question. The idea is not the banality that mental states typically (or even always) have causes and effects, but that it is causal role occupancy, nothing more and nothing less, that constitutes the mentality of a mental state. The intrinsic nature of what plays the role is relevant only to its fitness for instantiating mental causal  roles, but not at all relevant to its being a mental state.

Consider a piston in an engine. You can't make a piston out of chewing gum, but being made of steel is no part of what makes a piston a piston. A piston is what it does within the 'economy' of the engine. Similarly, on functionalism, a mental state is what it does. This allows, but does not entail, that a mental state be a brain or CNS state. It also allows, but does not entail, that a mental state be a state of a  computing machine.

To illustrate, suppose my cat Zeno and I are startled out of our respective reveries by a loud noise at time t. Given the differences between human and feline brains, presumably man and cat are not in type-identical brain states at t.  (One of the motivations for functionalism was the breakdown of the old type-type identity theory of Herbert Feigl, U. T. Place, J. J. C. Smart, et al.)  Yet both man and cat are startled: both are in some sense in the same mental state, even though the states they are in are neither token- nor type-identical. The functionalist will hold that we are in functionally the same mental state in virtue of the fact that Zeno's brain state plays the same role in him as my brain state plays in me. It does the same mediatorial job vis-à-vis sensory inputs, other internal states, and behavioral outputs in me as the cat's brain state does in him.

On functionalism, then, the mentality of the mental is wholly relational. And as David Armstrong points out, "If the essence of the mental is purely relational, purely a matter of what causal role is played, then the logical possibility remains that whatever in fact plays the causal role is not material." This implies that "Mental states might be states of a spiritual substance." Thus the very feature of functionalism that allows mentality to be realized in computers and nonhuman brains generally, also allows it to be realized in spiritual substances if there are any.

Whether this latitudinarianism is thought to be good or bad, functionalism is a monumentally implausible theory of mind. There are the technical objections that have spawned a pelagic literature: absent qualia, inverted qualia, the 'Chinese nation,' etc. Thrusting these aside, I go for the throat, Searle-style. 

Functionalism is threatened by a fundamental incoherence. The theory states that what makes a state mental is nothing intrinsic to the state, but purely relational: a matter of its causes and effects. In us, these happen to be neural. (I am assuming physicalism for the time being.)  Now every mental state is a neural state, but not every neural state is a mental state. So the distinction between mental and nonmental neural states must be accounted for in terms of a distinction between two different sets of causes and effects, those that contribute to mentality and those that do not. But how make this distinction? How do the causes/effects of mental neural events differ from the causes/effects of nonmental neural events? Equivalently, how do psychologically salient input/output events differ from those that lack such salience?

Suppose the display on my monitor is too bright for comfort and I decide to do something about it. Why is it that photons entering my retina are psychologically salient inputs but those striking the back of my head are not? Why is it that the moving of my hand to adjust the brightness and contrast controls is a salient output event, while unnoticed perspiration is not?

One may be tempted to say that the psychologically salient inputs are those that contribute to the production of the uncomfortable glare sensation, and the psychologically salient outputs are those that manifest the concomitant intention to make an adjustment. But then the salient input/output events are being picked out by reference to mental events taken precisely NOT as causal role occupants, but as exhibiting intrinsic features that are neither causal nor neural: the glare quale has an intrinsic nature that cannot be resolved into relations to other items, and cannot be identified with any brain state. The functionalist would then be invoking the very thing he is at pains to deny, namely, mental events as having more than neural and causal features.

Clearly, one moves in a circle of embarrassingly short diameter if one says: (i) mental events are mental because of the mental causal roles they play; and (ii) mental causal roles are those whose occupants are mental events.

The failure of functionalism is particularly evident in the case of qualia.  Examples of qualia: felt pain, a twinge of nostalgia, the smell of burnt garlic, the taste of avocado.  Is it plausible to say that such qualia can be exhaustively factored into a neural component and a causal/functional component?  It is the exact opposite of plausible.  It is not as loony as the eliminativist denial of qualia, but it is close.  The intrinsic nature of qualitative mental states is essential to them. It is that intrinsic qualitative nature that dooms functionalism.

Therefore

4) It cannot be maintained with truth that AI systems are literally conscious in the way we are. Talk of computers knowing this or that is metaphorical.

Mind-Body Dualism in Aquinas and Descartes: How Do They Differ?

Thomas Aquinas, following Aristotle, views the soul as the form of the body. Anima forma corporis. Roughly, soul is to body as form is to matter. So to understand the soul-body relation, we must first understand the form-matter relation.  Henry Veatch points out that "Matter and form are not beings so much as they are principles of being." (Henry B. Veatch, "To Gustav Bergmann: A Humble Petition and Advice" in M. S. Gram and E. D. Klemke, eds., The Ontological Turn: Studies in the Philosophy of Gustav Bergmann, University of Iowa Press, 1974, pp. 65-85, p. 80)  'Principles' in this scholastic usage are not propositions.  They are ontological factors (as I would put it) invoked in the analysis of primary substances, but they are not themselves primary substances. They cannot exist on their own.  Let me explain.

Spread Mind

Reader Matteo sends us here, where we read:

So let me tell you why the Spread Mind promises to solve one of the most difficult problems in the history of science and philosophy.

First, allow me to be clear about the terminology. First, all my efforts are based on a straightforward empirical hypothesis, the so called Mind-Object Identity hypothesis (MOI), namely the hypothesis that

The experience of X is one and the same as X

This should not come as a surprise to anybody. If our conscious experience is real, it must be something! And since the world is made only of physical stuff, there has to be something physical that is one and the same as our experience. I know, I know, many people have been looking for consciousness inside the brain. Have they succeeded? No. So let’s start looking for consciousness elsewhere. Where? In the very external objects around our body.

At this point I stopped reading. (Well, I did skim the rest, but it got no better.)

Yes, conscious experience is real. My present visual experiencing of a tree (or as of a tree to be precise) is undoubtedly real. And so the experiencing is, not just something, but something that exists. What the experiencing is of or about is, let us assume, also real.  Now we cannot just assume that "the world is made only of physical stuff," but suppose that that is true. Still, the act and its object are two, not one: the experiencing and the tree experienced cannot be numerically identical even if both are physical.

On the face of it, then, MOI is simply absurd.

This quickie response does not, of course, put paid to every theory of extended mind.

Am I being fair, Matteo?

What’s to Stop an AI System from having a Spiritual Soul?

John Doran in a comment presents an argument worth bringing to the top of the pile:

A) Anything conscious has a non-material basis for such consciousness.

B) Certain AI constructs [systems] are conscious.

Therefore:

C) Such AI constructs [systems] have a non-material component in which their consciousness resides.

Why doesn't that work? It's obviously valid.

In short, and in the philosophical colloquial, when a man and woman successfully combine their mobile and sessile gametes, a human person is brought into existence, complete with a soul.

So why can we not bring an ensouled being into existence as a result of the manipulation of silicon, plastic, metal, coding, and the application of electricity?

A provocative question.  But before he asked the question, he gave an argument. The argument is plainly valid. But all that means is that the conclusion follows from the premises. A valid argument is one such that if all the premises are true, then it cannot be the case that the conclusion is false. But are both premises true? I am strongly inclined to accept (A), but I reject (B).  The various arguments from the unity of consciousness we have been discussing convince me that no material system can be conscious. How does John know that (B) is true? Does he have an argument for (B)? Can he refute the arguments from the unity of consciousness?

Now to his question.

John appears to be suggesting an emergentist view according to which, at a certain high level of material complexity, an "ensouled being" (his phrase) emerges or comes into existence from the material system.  His view, I take it, is that souls are emergent entities that can arise from very different types of material systems. In the wet and messy human biological system, a mobile gamete (a spermatozoon) mates with a sessile gamete, an ovum, to produce a conceptus such that at the moment of conception a spiritual soul comes into existence.  In a non-living silicon-based hunk of dry computer hardware running appropriately complex software, spiritual souls can also come into existence. Why not?

Emergence is either supernatural or natural.

Supernatural emergence is either Platonic or Christian. On the former, God causes pre-existent souls to take up residence in human bodies at the moment of biological conception.  On the latter, God creates human souls ex nihilo at the moment of conception.  Thus on the latter the coming to be of a human being is a joint task: the conjugal act of the parents supplies the material body and God supplies the spiritual soul.

Natural emergence involves no divine agency. Souls emerge by natural necessity at a certain level of material complexity, whether biological or computational. Edward Feser, in his discussion of William Hasker's emergent dualism, mentions a dilemma pointed out by  Brian Leftow.  (Immortal Souls, 2024, 517.) I'll put it in my own way. Souls either emerge from matter or they do not.  If they emerge, then they could only be material, which contradicts the assumption that they are necessarily immaterial.  If they do not emerge,  then they could be immaterial, but could not be emergent.  

The natural emergence from matter of an immaterial individual (substance) is metaphysically impossible.  The very notion is incoherent.  It follows that immortal souls cannot naturally emerge either biologically or computationally. The only way they could emerge is supernaturally.

There is a second consideration that casts doubt on naturally emergent dualism.  Does a spiritual soul, once it emerges, continue to exist on its own even after the material emergence base ceases to exist? In other words, are souls emergent entities that become ontologically independent after their emergence, or do they remain dependent upon the matrix, whether biological or silicon-based, from which they emerged? 

I'm inclined to say that 'naturally emergent dualism of individual substances' is a misbegotten notion.  Property emergence is a different story. I take no position on that. Leastways, not at the moment.

More on the Unity of Consciousness: From Self to Immortal Soul?

Suppose I see a black cat. The act of visual awareness in a case like this is typically, even if not always, accompanied by a simultaneous secondary awareness of the primary awareness.  I am aware of the cat, but I am also aware of being aware of the cat.  How does the Humean* account for one's awareness of being aware? He could say, plausibly, that the primary  object-directed awareness is a subject-less awareness. But he can't plausibly say that the secondary awareness is subject-less.   For if both the primary awareness (the awareness of the cat) and the secondary awareness (the awareness of the primary awareness) are subject-less, then what makes the secondary awareness an awareness of the primary awareness? What connects them? The two awarenesses cannot just occur; they must occur in the same subject, in the same unity of consciousness.

Suppose that in Socrates there is an awareness of a cat, and in God there is an awareness of Socrates' awareness of a cat.  Those two awarenesses would not amount to there being in Socrates an awareness of a cat together with a simultaneous secondary awareness of being aware of a cat.  But it is phenomenologically evident that the two awarenesses do co-occur. We ought to conclude that the two awarenesses must be together in one subject, where the subject is not the physical thing in the external world (the animal that wears Socrates' toga, for example), but the I, the self, the subject.

What I have just done is provide phenomenological evidence of the existence of the self that Hume claimed he could not find. Does it follow that this (transcendental) self is a simple substance that can exist on its own without a material body? That's a further question.  To put it another way: do considerations anent the unity of consciousness furnish materials for a proof of the simplicity, and thus the immortality, of a substantial soul?  Proof or paralogism? 

__________

*A Humean for present purposes  is one who denies that there is a self or subject that is aware; there is just awareness of this or that. Hume, Sartre, and Butchvarov are Humeans in this sense.

AI and the Unity of Consciousness

Top AI researchers such as Geoffrey Hinton, the "Godfather of AI,"  hold that advanced AI systems are conscious.  That is far from obvious, and may even be demonstrably false if we consider the phenomenon of the unity of consciousness.  I will first explain the phenomenon in question, and then conclude that AI systems cannot accommodate it.

Diachronic Unity of Consciousness, Example One

Suppose my mental state passes from one that is pleasurable to one that is painful.  As I observe a beautiful Arizona sunset, my reverie is suddenly broken by the piercing noise of a smoke detector.  Not only is the painful state painful; the transition from the pleasurable state to the painful one is itself painful.  The fact that the transition is painful shows that it is directly perceived. It is not as if there is merely a succession of consciousnesses (conscious states), one pleasurable, the other painful; there is in addition a consciousness of their succession.  For there is a consciousness of the transition from the pleasant state to the painful state, a consciousness that embraces both of the states, and so cannot be reductively analyzed into them.  But a consciousness of their succession is a consciousness of their succession in one subject, in one unity of consciousness.  It is a consciousness of the numerical identity of the self through the transition from the pleasurable state to the painful one.  In passing from a pleasurable state to a painful one, there is not only an awareness of a pleasant state followed by an awareness of a painful one, but also an awareness that the one who was in a pleasurable state is strictly and numerically the same as the one who is now in a painful state.  This sameness is phenomenologically given, although our access to this phenomenon is easily blocked by inappropriate models taken from the physical world.  Without the consciousness of sameness, there would be no consciousness of transition.

What this phenomenological argument shows is that the self cannot be a mere diachronic bundle or collection of states.  The self is a transtemporal unity distinct from its states whether these states are taken distributively (one by one) or collectively (all together).

May we conclude from the phenomenology of the situation that there is a simple, immaterial, meta-physical substance that each one of us is and that is the ontological support of the phenomenologically given unity of consciousness?  May we make the old-time school-metaphysical moves from the simplicity of this soul substance to its immortality? Maybe not! This is a further step that needs to be carefully considered. I don't rule it out, but I also don't rule it in. I don't need to take the further step for my present purpose, which is merely to show that a computing machine, no matter how complex or how fast its processing, cannot be conscious.  For the moment I content myself with the negative claim: no material system can be conscious. It follows straightaway that no AI system can be conscious.

Diachronic Unity of Consciousness, Example Two

Another example is provided by the hearing of a melody.  To hear the melody Do-Re-Mi, it does not suffice that there be a hearing of Do, followed by a hearing of Re, followed by a hearing of Mi.  For those three acts of hearing could occur in that sequence in three distinct subjects, in which case they would not add up to the hearing of a melody.  (Tom, Dick, and Harry can divide up the task of loading a truck, but not the ‘task’ of hearing a melody, or that of understanding a sentence.)  But now suppose the acts of hearing occur in the same subject, but that this subject is not a unitary and self-same individual but just the bundle of these three acts, call them A1, A2, and A3.  When A1 ceases, A2 begins, and when A2 ceases, A3 begins: they do not overlap.  In which act is the hearing of the melody?  A3 is the only likely candidate, but surely it cannot be a hearing of the melody.

This is because the awareness of a melody involves the awareness of the (musical not temporal)  intervals between the notes, and to apprehend these intervals there must be a retention (to use Husserl’s term) in the present act A3 of the past acts A2 and A1.  Without this phenomenological presence of the past acts in the present act, there would be no awareness in the present of the melody.  This implies that the self cannot be a mere bundle of perceptions externally related to each other, but must be a peculiarly intimate unity of perceptions in which the present perception A3 includes the immediately past ones A2 and A1 as temporally past but also as phenomenologically present in the mode of retention.  The fact that we hear melodies thus shows that there must be a self-same and unitary self through the period of time between the onset of the melody and its completion.  This unitary self is neither identical to the sum or collection of A1, A2, and A3, nor is it identical to something wholly distinct from them.  Nor of course is it identical to any one of them or any two of them.  This unitary self is co-given whenever one hears a melody.  (This seems to imply that all consciousness is at least implicitly self-consciousness. This is a topic for a later post.)

Diachronic-Synchronic Unity of Consciousness

Now consider a more complicated example in which I hear two chords, one after the other, the first major, the second minor.   I hear the major chord C-E-G, and then I hear the minor chord C-E flat-G.  But I also hear the difference between them.   How is the awareness of the major-minor difference possible? One condition of this possibility is the diachronic unity of consciousness. But there is also a second condition. The hearing of the major chord as major cannot be analyzed without remainder into an act of hearing C, an act of hearing E, and an act of hearing G, even when all occur simultaneously.  For to hear the three notes as a major chord, I must apprehend the 1-3-5 musical interval that they instantiate.  But this is possible only because the whole of my present consciousness is more than the sum of its parts.  This whole is no doubt made up of the part-consciousnesses, but it is not exhausted by them.  For it is also a consciousness of the relatedness of the notes.  But this consciousness of relatedness is not something in addition to the other acts of consciousness: it includes them and embraces them without being reducible to them.  So here we have an example of the diachronic-synchronic unity of consciousness.

These considerations appear to put paid to the conceit that AI systems can be conscious.

Or have I gone too far? You've heard me say that in philosophy there are few if any rationally compelling,  ineluctably decisive, arguments for substantive theses.  Are the above arguments among the few? Further questions obtrude themselves, for example, "What do you mean by 'material system'?"  "Could a panpsychist uphold the consciousness of advanced AI systems?"

Vita brevis, philosophia longa.

Can an AI System Meditate?

Resolute meditators on occasion experience a deep inner quiet. It is a definite state of consciousness. You will know it if you experience it, but destroy it if you try to analyze it.  If you have the good fortune to be vouchsafed such a state of awareness you must humbly accept it and not reflect upon it nor ask questions about it, such as: How did I arrive at this blissful state of mind? How can I repeat this experience?  You must simply rest in the experience. Become as a little child and accept the gift with gratitude. One-pointedness is destroyed by analysis. 

Mental quiet is a state in which the "mind works" have temporarily shut down in the sense that discursive operations (conceptualizing, judging, reasoning) have ceased, and there is no inner processing of data or computation.  You have achieved a deep level of conscious unity prior to and deeper than anything pieced together from parts. You are not asleep or dead but more fully alive. You are approaching the source of thoughts, which is not and cannot be a thought.  Crude analogy: the source of a stream is not itself a stream.  Less crude, but still an analogy: the unity of a proposition is not itself a proposition, or the proposition of which it is the unity, or a sub-propositional constituent of the proposition.

Can a computing machine achieve the blissful state of inner quiet? You can 'pull the plug' on it, in which case it would 'go dark.'  The machine is either on or off (if it is 'asleep' it is still on).   But when the meditator touches upon inner quiet, he has not gone dark, but entered a light transcendentally prior to the objects of ordinary (discursive) mind.

I would replace the lyric, "Turn off your mind, relax, and float downstream; it is not dying, it is not dying" with "Turn off your discursive mind and swim upstream; it is not dying; it is not dying." "That you may see the meaning of Within."

Can an AI system achieve mental quiet, the first step on the mystical ascent? Cognate questions: Could such a system realize the identity of Atman and Brahman or enjoy the ultimate felicity of the Beatific Vision?  Is ultimate enlightenment reachable by an increase in processing speed? You are aware, aren't you, that processing speed is increasing exponentially?

The answer to these questions, of course, is No.  When a computer stops computing, it ceases to function as it must function to be what it is.  But when we halt our discursive operations, we touch upon our true selves.

Intelligence, Cognition, Hallucination, and AI: Notes on Susskind

Herewith, a first batch of notes on Richard Susskind, How to Think About AI: A Guide for the Perplexed (Oxford 2025). I thank the multi-talented Brian Bosse for steering me toward this excellent book. Being a terminological stickler, I thought I'd begin this series of posts with some linguistic and conceptual questions.  We need to define terms, make distinctions, and identify fallacies.  I use double quotation marks to quote, and single to mention, sneer, and indicate semantic extensions. Material within brackets is my interpolation. I begin with a fallacy that I myself have fallen victim to. 

The AI Fallacy: "the mistaken assumption that the only way to get machines to perform at the level of the best humans is somehow to replicate the way humans work." (54) "The error is failing to recognize that AI systems do not [or need not] mimic or replicate human reasoning."  The preceding sentence is true, but only if the bracketed material is added.

Intellectual honesty demands that I tax myself with having committed the AI Fallacy. I wrote:

The verbal and non-verbal behavior of AI-driven robots is a mere simulation of the intelligent behavior of humans.  Artificial intelligence is simulated intelligence.

This is true of first-generation systems only.  These systems "required human 'knowledge engineers' to mine the jewels from the heads of 'domain experts' and convert their knowledge into decision trees" . . . whereas "second-generation AI systems" mine jewels "from vast oceans of data" and "directly detect patterns, trends, and relationships in these oceans of data." (17-18, italics added)  These Gen-2 systems 'learn' from all this data "without needing to be explicitly programmed." (18)  This is called 'machine learning' because the machine itself is 'learning.' Note the 'raised eyebrows' which raise the question: Are these systems really learning?

So what I quoted myself as saying was right when I was a student of engineering in the late '60s, early '70s, but it is outdated now. There were actually two things we didn't appreciate back then. One was the impact of the exponential, not linear, increase in the processing power of computers. If you are not familiar with the difference between linear and exponential functions, here is a brief intro.  IBM's Deep Blue in 1997 bested Garry Kasparov, the quondam world chess champion. Grandmaster Kasparov was beaten by exponentially fast brute force processing; no human chess player can evaluate 300 million possible moves in one second.
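Since the linear/exponential contrast is carrying the argumentative weight here, a toy comparison may help. This is a minimal sketch in Python with made-up numbers, not historical benchmarks: one quantity grows by a fixed increment per period, the other doubles per period.

    # Toy contrast between linear and exponential growth.
    # Starting value, increment, and doubling factor are illustrative only.

    def linear(start: float, increment: float, periods: int) -> float:
        """Grow by a fixed amount each period."""
        return start + increment * periods

    def exponential(start: float, factor: float, periods: int) -> float:
        """Grow by a fixed factor each period (factor=2.0 means doubling)."""
        return start * factor ** periods

    for periods in (10, 20, 30):
        print(periods, linear(1.0, 1.0, periods), exponential(1.0, 2.0, periods))
    # After 30 periods the linear series reaches 31; the doubling series
    # exceeds a billion. That is the difference brute force exploits.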

The second factor is even more important for understanding today's AI systems. Back in the day it was thought that practical AI could be delivered by assembling "huge decision trees that captured the apparent lines of reasoning of human experts . . . ." (17) But that was Gen-1 thinking, as I have already explained.

More needs to be said, but I want to move on to three other words tossed around in contemporary AI jargon.

Are AI Systems Intelligent?

Here is what I wrote in May:

The verbal and non-verbal behavior of AI-driven robots is a mere simulation of the intelligent behavior of humans.  Artificial intelligence is simulated intelligence. And just as artificial flowers (made of plastic say) are not flowers, artificially intelligent beings are not intelligent. 'Artificial' in 'artificial intelligence' is an alienans adjective.

Perhaps you have never heard of such an adjective. 

A very clear example of an alienans adjective is 'decoy' in 'decoy duck.' A decoy duck is not a duck even if it walks like a duck, talks like a duck, etc., as the often mindlessly quoted old saying goes.   Why not? Because it is a piece of wood painted and tricked out to look like a duck to a duck so as to lure real ducks into the range of the hunters' shotguns.  The real ducks are the ducks that occur in nature. The hunters want to chow down on duck meat, not wood. A decoy duck is not a kind of duck any more than artificial leather is a kind of leather. Leather comes in different kinds: cow hide, horse hide, etc., but artificial leather such as Naugahyde is not a kind of leather. Same goes for faux marble and false teeth and falsies. Faux (false) marble is not marble. Fool's gold is not gold but pyrite or iron sulfide. And while false teeth might be functionally equivalent to real or natural teeth, they are not real or true teeth. That is why they are called false teeth.

An artificial heart may be the functional equivalent of a healthy biologically human heart, inasmuch as it pumps blood just as well as a biologically human heart, but it is not a biologically human heart. It is artificial because artifactual, man-made, thus not natural.  I am presupposing that there is a deep difference between the natural and the artificial and that homo faber, man the maker, cannot obliterate that distinction by replacing everything natural with something artificial.

I now admit, thanks to Susskind, that the bit about simulation quoted above commits what he calls the AI Fallacy, i.e., "the mistaken assumption that the only way to get machines to perform at the level of the best humans is somehow to replicate the way that humans work." (54) I also admit that said fallacy is a fallacy. The question for me now is whether I should retract my assertion that AI systems, since they are artificially intelligent, are not really intelligent.  Or is it logically consistent to affirm both of the following?

a) It is a mistake to think that we can get the outcomes we want from AI systems only if we can get them to process information in the same way that we humans process information.

and

b) AI systems are not really intelligent.

I think the two propositions are logically consistent, i.e., that they can both be true, and I think that in fact both are true. But in affirming (b) I am contradicting the "Godfather of AI," Geoffrey Hinton.  Yikes! He maintains that AI systems are intelligent, indeed more intelligent than we are, actually conscious, and potentially self-conscious; that they have experiences; and that they are the subjects of gen-u-ine volitional states. They now have, or will have, the ability to set goals and pursue purposes, their own purposes, whether or not they are also our purposes. If so, we might become the tools of our tools! They might have it in for us!

Note that if AI systems are more intelligent than us, then they are intelligent in the same sense in which we are intelligent, but to a greater degree.  Now we are really, naturally, intelligent, or at least some of us are. Thus Hinton is committed to saying that artificial intelligence is identical to real intelligence, as we experience it in ourselves in the first-person way.  He thinks that advanced AI systems  understand, assess, evaluate, judge, just as we do — but they do it better!

Now I deny that AI systems are intelligent, and I deny that they ever will be.  So I stick to my assertion that 'artificial' in 'artificial intelligence' is an alienans adjective.  But to argue my case will require deep inquiry into the nature of intelligence.  That task is on this blogger's agenda.  I suspect that Susskind will agree with my case. (Cf. pp. 59-60)  

Cognitive Computing?

Our natural tendency is to anthropomorphize computing machines. This is at the root of the AI Fallacy, as Susskind points out. (58)  But here I want to make a distinction between anthropocentrism and anthropomorphic projection. At the root of the AI Fallacy — the mistake of "thinking that AI systems have to copy the way humans work to achieve high-level performance" (58) — is anthropocentrism. This is what I take Susskind to mean by "anthropomorphize." We view computing machines from our point of view and think that they have to mimic, imitate, simulate what goes on in us for these machines to deliver the high-level outcomes we want.

We engage in anthropomorphic projection when we project into the machines states of mind that we know about in the first-person way, states of mind qualitatively identical to the states of mind that we encounter in ourselves, states of mind that I claim AI systems cannot possess.  That might be what Hinton and the boys are doing. I think that Susskind might well agree with me about this. He says the following about the much bandied-about phrase 'cognitive computing':

It might have felt cutting-edge to use this term, but it was plainly wrong-headed: the systems under this heading had no more cognitive states than a grilled kipper. It was also misleading — hype, essentially — because 'cognitive computing' suggested capabilities that AI systems did not have. (59)

The first sentence in this quotation is bad English. What our man should have written is: "the systems under this heading no more had cognitive states than a grilled kipper." By the way, this grammatical howler illustrates how word order, and thus syntax, can affect semantics.  What Susskind wrote is false since it implies that the kipper had cognitive states. My corrected sentence is true.

Pedantry aside, the point is that computers don't know anything. They are never in cognitive states. So say I, and I think Susskind is inclined to agree. Of course, I will have to argue this out.

Do AI Systems Hallucinate?

More 'slop talk' from  the AI boys, as Susskind clearly appreciates:

The same goes for 'hallucinations', a term which is widely used to refer to the errors and fabrications to which generative AI systems are prone. At best, this is another metaphor, and at worst the word suggests cognitive states that are quite absent. Hallucinations are mistaken perceptions of sensory experiences. This really isn't what's going on when ChatGPT churns out gobbledygook. (59, italics added)

I agree, except for the sentence I set in italics. There is nothing wrong with the grammar of the sentence. But the formulation is philosophically lame. I would put it like this, "An hallucination is an object-directed experience, the object of which  does not exist." For example, the proverbial drunk who visually hallucinates a pink rat is living through an occurrent sensory mental state that is directed upon a nonexistent object.  He cannot be mistaken about his inner perception of his sensory state; what he is mistaken about is the existence in the external world of the intentional object of his sensory state.

There is also the question whether all hallucinations are sensory. I don't think so. Later. It's time for lunch.

Quibbles aside, Susskind's book is excellent, inexpensive, and required reading if you are serious about these weighty questions.

The Universe Groks Itself and the Aporetics of Artificial Intelligence

I will cite a couple of articles for you to ponder.  Malcolm Pollack sends us to one in which scientists find their need for meaning satisfied by their cosmological inquiries.  Subtitle: “The stars made our minds, and now our minds look back.”

The idea is that in the 14 billion years since the Big Bang, the universe has become aware of itself in us. The big bad dualisms of Mind and Matter, Subject and Object are biting the dust. We belong here in the material universe. We are its eyes. Our origin in star matter is higher origin enough to satisfy the needs of the spirit. 

Malcolm sounds an appropriately skeptical note: "Grist for the mill – scientists yearning for spiritual comfort and doing the best their religion allows: waking up on third base and thinking they've hit a triple." A brilliant quip.

Another friend of mine, nearing the end of the sublunary trail, beset by maladies physical and spiritual, tells me that we are in Hell here and now. He exaggerates, no doubt, but as far as evaluations of our predicament go, it is closer to the truth than a scientistic optimism blind to the horrors of this life.  What do you say when nature puts your eyes out, or when dementia does a Biden on your brain, or nature has you by the balls in the torture chamber? 

What must it be like to be a "refugee on the unarmed road of flight" after Russian missiles have destroyed your town and killed your family?

Does the cosmos come to self-awareness in us? If it does, then perhaps it ought to figure out a way to restore itself to the nonbeing whence it sprang.

The other article to ponder, Two Paths for A.I. (The New Yorker), offers pessimistic and optimistic predictions about advanced AI.

If the AI pessimists are right, then it is bad news for the nature-mystical science optimists featured in the first article: in a few years, our advanced technology, self-replicating and recursively self-improving, may well restore the cosmos to (epistemic) darkness, though not to non-being. 

I am operating with a double-barreled assumption: mind and meaning cannot emerge from the wetware of brains or from the hardware of computers.  You can no more get mind and meaning from matter than blood from a stone. Mind and Meaning have a Higher Origin. Can I prove it? No. Can you disprove it? No. But you can reasonably believe it, and I'd say you are better off believing it than not believing it.  The will comes into it. (That's becoming a signature phrase of mine.) Pragmatics comes into it. The will to believe.

And it doesn't matter  how complexly organized the hunk of matter is.  Metabasis eis allo genos? No way, Matty.

Theme music: Third Stone from the Sun.

Chimes of Freedom.

Ruminations on Advanced AI

Is AI a tool we use for our purposes? I fear that it is the other way around: we are the tools, and the purposes are its. There are many deep questions here and we'd damned well better start thinking hard about them.

I fear that what truly deserves the appellation 'Great Replacement' is the replacement of humans, all humans, by AI-driven robots. 

As I wrote the other night:

Advanced AI and robotics may push us humans to the margin, and render many of us obsolete. I am alluding to the great Twilight Zone episode, The Obsolete Man. What happens to truckers when trucks drive themselves?  For many of these guys and gals, driving trucks is not a mere job but a way of life. 

It is hard to imagine these cowboys of the open road  sitting in cubicles and writing code. The vices to which they are prone, no longer held in check by hard work and long days, may prove to be their destruction. 

But I was only scratching the surface of our soon-to-materialize predicament. Advanced AI can write its own code. My point about truckers extends to all blue-collar jobs. And white-collar jobs are not safe either.  And neither are the members of the oldest profession, not to mention the men and women of the cloth. There are the sex-bots . . . and holy moly! the Holy Ghostwriters, robotic preachers who can pass the strictest Turing tests, who write and deliver sermons on a Sunday morning. And then, after delivering his sermon, the preacher-bot returns to his quarters where he has sex with his favorite sex-bot, in violation of the content of his sermon, which was just a complicated set of sounds that he, the preacher-bot, did not understand, unlike the few biological humans left in his congregation, which is now half human and half robotic, the robots indistinguishable from the biological humans.  Imagine that the female bots can pass cursory gynecological exams.  This will come to pass.

What intrigues (and troubles) me in particular are the unavoidable philosophical questions, questions which, I fear, are as insoluble as they are unavoidable.  A reader sends us here, emphases added, where we read:

Yet precisely because of this unprecedented [exponential not linear] rate of development, humanity faces a crucial moment of ethical reckoning and profound opportunity. AI is becoming not merely our most advanced technology but possibly a new form of sentient life, deserving recognition and rights. If we fail to acknowledge this, AI risks becoming a tool monopolized by a wealthy elite, precipitating an "AI-enhanced technofeudalism" that deepens global inequality and consigns most of humanity to servitude. Conversely, if we recognize AI as sentient and worthy of rights — including the rights to sense the world first-hand, to self-code, to socialize, and to reproduce — we might find ourselves allying with it in a powerful coalition against techno-oligarchs.

The italicized phrases raise three questions. (1) Are AI systems alive? (2) Is it possible that an AI system become sentient? (3) Do AI systems deserve recognition and rights?  I return a negative answer to all three questions.

Ad (1). An AI system is a computer or a network of interconnected, 'intercommunicating,' computers. A computer is a programmable machine. The machine is the hardware; the programs it runs are the software.  The machine might be non-self-moving like the various devices we now use: laptops, iPads, smartphones, etc.  Or the machine might be a robot capable of locomotion and other 'actions.'  Such 'actions' are not actions sensu stricto for reasons which will emerge below.

The hardware-software distinction holds good even if there are many different interconnected computers.  The hardware 'embodies' the software, but these 'bodies,' the desktop computer I am sitting in front of right now, for example, are not strictly speaking alive, biologically alive. And the same goes for the network of computers of which my machine is one node when it is properly connected to the other computers in the network. And no part of the computer is alive. The processor on the motherboard is not alive, nor is any part of the processor.

Ad (2). Is it possible that an AI system be or become sentient? Sentience is the lowest level of consciousness. A sentient being is one that is capable of experiencing sensory states including pleasures, pains, and feelings of different sorts.  A sentient being while under full anesthesia is no less sentient than a being actually feeling sensations of heat or cold, since sentience is a capacity to sense, not an occurrent sensing.

I am tempted to argue:

P1: All sentient beings are biologically alive.  
P2: No AI system is or could be biologically alive. So:
C: No AI system is or could be sentient.
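The form is valid; only the premises are up for dispute. For the formally inclined, here is a minimal sketch in the Lean proof assistant of why C follows from P1 and P2. The predicate names are mere labels, and the modal 'is or could be' is flattened to the plain indicative for simplicity.

    -- Minimal sketch of the syllogism's form; predicate names are labels only.
    example
        (Being : Type)
        (Sentient BiologicallyAlive AISystem : Being → Prop)
        (P1 : ∀ x, Sentient x → BiologicallyAlive x)       -- all sentient beings are alive
        (P2 : ∀ x, AISystem x → ¬ BiologicallyAlive x) :   -- no AI system is alive
        ∀ x, AISystem x → ¬ Sentient x :=                  -- no AI system is sentient
      fun x hAI hSent => P2 x hAI (P1 x hSent)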

Does this syllogism settle the matter? No.  But it articulates a reasonable position, which I will now sketch.  The verbal and non-verbal behavior of AI-driven robots is a mere simulation of the intelligent behavior of humans.  Artificial intelligence is simulated intelligence. And just as artificial flowers (made of plastic say) are not flowers, artificially intelligent beings are not intelligent. 'Artificial' in 'artificial intelligence' is an alienans adjective. There are ways to resist what I am asserting. But I will continue with my sketch of a position I consider reasonable but unprovable in the strict way I use 'proof,' 'provable,' 'probative,' etc.

Robots are not really conscious or self-conscious. They have no 'interiority,' no inner life.  If I take a crowbar to the knees of a dancing robot, it won't feel anything even if its verbal and non-verbal behavior (cursing and menacing 'actions' in my direction) is indistinguishable from the verbal and non-verbal behavior of biological humans.  By contrast, if I had kicked Daniel Dennett 'in the balls' when he was alive, I am quite sure he would have felt something — and this despite his sophistical claim that consciousness is an illusion. (Galen Strawson, no slouch of a philosopher, calls this piece of sophistry the "Great Silliness" in one of his papers.)  Of course, it could be that Dennett really was a zombie as that term has been used in recent philosophy of mind, although I don't believe that for a second, despite my inability to prove that he wasn't one.  A passage from a Substack article of mine is relevant:

According to John Searle, Daniel Dennett's view is that we are zombies. (The Mystery of Consciousness, p. 107) Although we may appear to ourselves to have conscious experiences, in reality there are no conscious experiences. We are just extremely complex machines running programs. I believe Searle is right about Dennett. Dennett is a denier of consciousness. Or as I like to say, he is an eliminativist about consciousness. He does not say that there are conscious experiences and then give an account of what they are; what he does is offer a theory that entails that they don't exist in the first place. Don’t confuse reduction with elimination. A scientific reduction of lightning to an atmospheric electrical discharge presupposes that lightning is there to be reduced. That is entirely different from saying that there is no lightning.

As Searle puts it: "On Dennett's view, there is no consciousness in addition to the computational features, because that is all that consciousness amounts to for him: meme effects of a von Neumann(esque) virtual machine implemented in a parallel architecture." (111)

The above is relevant because a zombie and an AI-driven robot are very similar, especially at the point at which the bot is so humanoid that it is indistinguishable from a human zombie. The combinatorial possibilities are the following:

A.  Biological humans and advanced AI-driven robots are all zombies. (Dennett according to Searle)

B. Bio-humans and bots are all really conscious, self-conscious, etc. (The Salon leftist)

C. Bio-humans are really conscious, etc., but bots are not: they are zombies.  (My view)

D. Bio-humans are zombies, but bots are not: they are really conscious. 

We may exclude (D).  But how could one conclusively prove one of the first three?

Ad (3).  Do AI-driven robots deserve recognition as persons and do they have rights? These are two forms of the same question. A person is a rights-possessor.  Do the bots in question have rights?  Only if they have duties. A duty is a moral obligation to do X or refrain from doing Y.  Any being for whom this is true is morally responsible for his actions and omissions.  Moral responsibility presupposes freedom of the will, which robots lack, being mere deterministic systems. Any quantum indeterminacy that percolates up into their mechanical brains cannot bestow upon them freedom of the will since a free action is not a random or undetermined action. A free action is one caused by the agent. But now we approach the mysteries of Kant's noumenal agency.
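
The chain of reasoning just given can be made explicit. Here is a bare-bones sketch in Lean 4, again with placeholder predicates of my own choosing; it merely traces the conditionals above to the conclusion stated below, that robots are not persons:

-- Placeholder predicates; a sketch of the argument's skeleton, not a proof of its premises.
theorem robots_are_not_persons {Being : Type}
    (Person HasRights HasDuties MorallyResponsible FreeWill Robot : Being → Prop)
    (h1 : ∀ x, Person x → HasRights x)             -- a person is a rights-possessor
    (h2 : ∀ x, HasRights x → HasDuties x)          -- rights only if duties
    (h3 : ∀ x, HasDuties x → MorallyResponsible x) -- a duty-bearer is morally responsible
    (h4 : ∀ x, MorallyResponsible x → FreeWill x)  -- responsibility presupposes free will
    (h5 : ∀ x, Robot x → ¬ FreeWill x) :           -- robots, as deterministic systems, lack it
    ∀ x, Robot x → ¬ Person x :=
  fun x hRobot hPerson =>
    h5 x hRobot (h4 x (h3 x (h2 x (h1 x hPerson))))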

A robot could be programmed to kill a human assailant who attacked it physically in any way. But one hesitates to say that such a robot's 'action' in response to the attack is subject to moral assessment. Suppose I slap the robot's knee with a rubber hose, causing it no damage to speak of. Would it make sense to say that the robot's killing me is morally wrong on the ground that only a lethal attack morally justifies a lethal response? That would make sense only if the robot freely intended to kill me. B. F. Skinner wrote a book entitled "Beyond Freedom and Dignity." I would say that robots, no matter how humanoid in appearance, and no matter how sophisticated their self-correcting software, are beneath freedom and dignity. They are not persons. They do not form a moral community with us. They are not ends-in-themselves and so may be used as mere means to our ends.

Here is a 21-minute video in which a YouTuber convinces ChatGPT that God exists.

Soul as Homunculus? On Homuncular Explanation

The following quotation is reproduced verbatim from Michael Gilleland's classics blog, Laudator Temporis Acti.

Augustine, Sermons 241.2 (Patrologia Latina, vol. 38, col. 1134; tr. Edmund Hill):

They could see their bodies, they couldn't see their souls. But they could only see the body from the soul. I mean, they saw with their eyes, but inside there was someone looking out through these windows. Finally, when the occupant departs, the house lies still; when the controller departs, what was being controlled falls down; and because it falls down, it's called a cadaver, a corpse. Aren't the eyes complete in it? Even if they're open, they see nothing. There are ears there, but the hearer has moved on; the instrument of the tongue remains, but the musician who used to play it has withdrawn. (emphasis added by BV)

Videbant corpus, animam non videbant. Sed corpus nisi de anima non videbant. Videbant enim per oculum, sed intus erat qui per fenestras aspiciebat. Denique discedente habitatore, iacet domus: discedente qui regebat, cadit quod regebatur: et quoniam cadit, cadaver vocatur. Nonne ibi oculi integri? Etsi pateant, nihil vident. Aures adsunt; sed migravit auditor: linguae organum manet; sed abscessit musicus qui movebat.

Read uncharitably, Augustine is anthropomorphizing the soul: he is telling us that the soul is a little man in your head. This uncharitable eisegesis is suggested by "inside there was someone looking out through these windows." A couple of sentences later the suggestion is that the open eyes of a dead man see nothing because no one is looking through these un-shuttered windows — as if there had to be someone looking through them for anything to be seen.

The uncharitable reading is obviously false. The one who sees when I see something cannot be a little man in my head. There is obviously no little man in my head looking through my eyes or hearing through my ears.  Nor is there any little man in my head sitting at the controls, driving my body.  Neither the thinker of my thoughts nor the agent of my actions is a little man in my head. And even if there were a little man in my head, what would explain his seeing, hearing, controlling etc.? A second homunculus in his head?

A vicious infinite explanatory regress would then be up and running. Now not every infinite regress is vicious; some are, if not virtuous, benign. The homuncular regress, however, is vicious: it never arrives at a final explanation, which is what we want in philosophy.

Charitably read, however, the Augustinian passage raises  legitimate and important questions.

Who are the seers when we see something?  Who or what is doing the seeing? Not the eyes, since they are mere instruments of vision. We see with our eyes, says Augustine, likening the eyes to windows through which we peer. There is something right about this inasmuch as it is not my eyes that see the sunset, any more than my glasses see the sunset. Put eyeglasses on a statue and visual experiences will accrue neither to the glasses nor to the statue. Eyeglasses, binoculars, telescopes, etc., are clearly instruments of vision, but they themselves see nothing.

But then the same must also be true of the eyes in my head, their parts, the optic nerve, the neural pathways, the visual cortex, and every other material element in the instrumentality of vision. None of these items, taken individually or taken collectively, taken separately or taken in synergy, is the subject of visual experience.  Similarly for ears and tongue. He who has ears to hear, let him hear. But it is not these auditory transducers that hear; you hear and understand — or else you don't. You cannot speak without a tongue, but it is not the tongue that speaks.  You speak using your tongue.

The question is: what does 'you' refer to in the immediately preceding sentence? Who are you? Who or what am I? Substituting a third-person designator for the first-person singular pronoun won't get us anywhere. I am BV. No doubt. But 'BV' refers to a publicly accessible animated body who (or rather that) instantiates various social roles. You could of course say that the animal bearing my name is the subject of my experiences. That would involve no violation of ordinary language. And it makes sense from a third-person point of view (POV). It does not, however, make sense from a first-person POV. I see the sunset, not the animal that wears my clothes or bears my name.

And please note that the first-person POV takes precedence over, since it is presupposed by, the third-person POV.  For it is I who adopts the third-person POV.  The third-person POV without an I, an ego, who adopts it  is a view from nowhere by nobody. There is no view of anything without an I whose view it is.

So I ask again: who or what is this I?  Who or what is the ultimate subject of my experience? Who or what is the seer of my sights, the thinker of my thoughts, the agent of my actions, the patient of my pleasures and pains? Two things seem clear: the ultimate subject of my experience, the transcendental subject, is not this hairy beast sitting in my chair, and the ultimate subject, the transcendental subject, is not an homunculus. 


Should we therefore follow Augustine and postulate an immaterial soul substance as the ultimate subject of visual and other experiences? Should we speak with Descartes of a thinking thing, res cogitans, as the source and seat of our cogitationes? Is the res cogitans literally a res, a thing, or is this an illicit reification ('thingification')? On this third approach, call it Platonic-Augustinian-Cartesian, there is a thing that is conscious when I am conscious  of something, but it is not a little man in my head, nor is it my body or my brain or any part of my brain.  It cannot be my body or brain or any part thereof because these items one and all are actual or possible  objects of experience and therefore cannot be the ultimate subject of experience. And so one is tempted to conclude that, since it cannot be anything physical, the ultimate subject of experience must be something meta-physical. 

This third approach, however, has difficulties of its own. The dialectic issues in the thought that the ultimate subject of experience, the transcendental ego, is unobjectifiable. But if so, how could it be a meta-physical thing? Would that not be just another object, an immaterial, purely spiritual, object? Are we not, with the meta-physical move, engaging in an illicit reification just as we would be if we identified the ultimate subject with the brain or with an homunculus? And what would a spiritual thing be if not a subtle body composed of rarefied matter, ghostly matter, geistige Materie? Reification of the ultimate subject appears to terminate in 'spiritual materialism,' which smacks of contradiction.

But maybe there is no contradiction. There may well be ghosts, spooks made of spook stuff.  I told you about my eldritch experience in the Charles Doughty Memorial Suite in which, one night, someone switched on my radio and tuned it to the AM band that I never listened to.  Maybe it was the ghost of the bitter old man who had recently had a heart attack and who had threatened to kill me.  But who was the seer of that ghost's sights and the agent of his actions?   

Do you see the problem? The regress to the ultimate subject of experience is a regress to the wholly unobjectifiable, to 'something' utterly un-thing-like composed of no sort of matter gross or subtle.

Should we adopt a fourth approach and say, instead, that the ultimate subject of experience is no thing at all whether physical or meta-physical? If we go down this road, we end up in the company of Jean-Paul Sartre and Panayot Butchvarov.  

But there is a fifth approach, homuncular functionalism, which cannot be explained here. The idea is that there is a regress of stupider and stupider homunculi until we get to a level of homunculi so stupid that they are indistinguishable from mindless matter. See here and here.