Top AI researchers such as Geoffrey Hinton, the "Godfather of AI," hold that advanced AI systems are conscious. That is far from obvious, and may even be demonstrably false if we consider the phenomenon of the unity of consciousness. I will first explain the phenomenon in question, and then conclude that AI systems cannot accommodate it.
Yes, my boy, for no one can suppose that in each of us, as in a sort of Trojan horse, there are perched a number of unconnected senses which do not all meet in some one nature, the mind, or whatever we please to call it, of which they are the instruments, and with which through them we perceive the objects of sense. (Plato, Theaetetus 184d; emphasis added; tr. Benjamin Jowett)
The issue here, to put it in Kantian terms, is the unity of consciousness in the synthesis of a manifold of sensory data. Long before Kant, and long before Leibniz, Plato was well aware of the problem of the unity of consciousness. (It is not for nothing that Alfred North Whitehead described Western philosophy as a series of footnotes to Plato.)
Sitting before a fire, I see the flames, feel the heat, smell the smoke, and hear the crackling of the logs. The sensory data are unified in one consciousness of a selfsame object, the fire in the fireplace. This unification does not take place in the eyes or in the ears or in the nostrils or in any other sense organ, and to say that it takes place in the brain is not a good answer. For the brain is a partite physical thing extended in space. If the unity of consciousness is identified with a portion of the brain, then the unity is destroyed. For no matter how small the portion of the brain in which the unification is supposed to occur, it has proper parts external to each other. Every portion of the brain, no matter how small, is a complex entity. But consciousness in the synthesis of a manifold is a simple unity. Hence the unity of consciousness cannot be understood along materialist lines.
This argument against materialist theories of mind from the unity of consciousness may also be developed as follows.
Diachronic Unity of Consciousness, Example One
Suppose my mental state passes from one that is pleasurable to one that is painful. As I observe a beautiful Arizona sunset, my reverie is suddenly broken by the piercing noise of a smoke detector. Not only is the painful state painful, the transition from the pleasurable state to the painful one is itself painful. The fact that the transition is painful shows that it is directly perceived. It is not as if there is merely a succession of consciousnesses (conscious states), one pleasurable the other painful; there is in addition a consciousness of their succession. For there is a consciousness of the transition from the pleasurable state to the painful state, a consciousness that embraces both of the states, and so cannot be reductively analyzed into them. But a consciousness of their succession is a consciousness of their succession in one subject, in one unity of consciousness. It is a consciousness of the numerical identity of the self through the transition from the pleasurable state to the painful one. Passing from a pleasurable state to a painful one, there is not only an awareness of a pleasurable state followed by an awareness of a painful one, but also an awareness that the one who was in a pleasurable state is strictly and numerically the same as the one who is now in a painful state. This sameness is phenomenologically given, although our access to this phenomenon is easily blocked by inappropriate models taken from the physical world. Without the consciousness of sameness, there would be no consciousness of transition.
What this phenomenological argument shows is that the self cannot be a mere diachronic bundle or collection of states. The self is a transtemporal unity distinct from its states whether these states are taken distributively (one by one) or collectively (all together).
May we conclude from the phenomenology of the situation that there is a simple, immaterial, metaphysical substance that each one of us is and that is the ontological support of the phenomenologically given unity of consciousness? May we make the old-time school-metaphysical moves from the simplicity of this soul substance to its immortality? Maybe not! This is a further step that needs to be carefully considered. I don't rule it out, but I also don't rule it in. Nor do I need to take that further step for my present purpose, which is merely to show that a computing machine, no matter how complex or how fast its processing, cannot be conscious. For the moment I content myself with the negative claim: no material system can be conscious. It follows straightaway that no AI system can be conscious.
Diachronic Unity of Consciousness, Example Two
Another example is provided by the hearing of a melody. To hear the melody Do-Re-Mi, it does not suffice that there be a hearing of Do, followed by a hearing of Re, followed by a hearing of Mi. For those three acts of hearing could occur in that sequence in three distinct subjects, in which case they would not add up to the hearing of a melody. (Tom, Dick, and Harry can divide up the task of loading a truck, but not the ‘task’ of hearing a melody, or that of understanding a sentence.) But now suppose the acts of hearing occur in the same subject, but that this subject is not a unitary and self-same individual but just the bundle of these three acts, call them A1, A2, and A3. When A1 ceases, A2 begins, and when A2 ceases, A3 begins: they do not overlap. In which act is the hearing of the melody? A3 is the only likely candidate, but surely it cannot be a hearing of the melody.
This is because the awareness of a melody involves the awareness of the (musical not temporal) intervals between the notes, and to apprehend these intervals there must be a retention (to use Husserl’s term) in the present act A3 of the past acts A2 and A1. Without this phenomenological presence of the past acts in the present act, there would be no awareness in the present of the melody. This implies that the self cannot be a mere bundle of perceptions externally related to each other, but must be a peculiarly intimate unity of perceptions in which the present perception A3 includes the immediately past ones A2 and A1 as temporally past but also as phenomenologically present in the mode of retention. The fact that we hear melodies thus shows that there must be a self-same and unitary self through the period of time between the onset of the melody and its completion. This unitary self is neither identical to the sum or collection of A1, A2, and A3, nor is it identical to something wholly distinct from them. Nor of course is it identical to any one of them or any two of them. This unitary self is co-given whenever one hears a melody. (This seems to imply that all consciousness is at least implicitly self-consciousness. This is a topic for a later post.)
Diachronic-Synchronic Unity of Consciousness
Now consider a more complicated example in which I hear two chords, one after the other, the first major, the second minor. I hear the major chord C-E-G, and then I hear the minor chord C-E flat-G. But I also hear the difference between them. How is the awareness of the major-minor difference possible? One condition of this possibility is the diachronic unity of consciousness. But there is also a second condition. The hearing of the major chord as major cannot be analyzed without remainder into an act of hearing C, an act of hearing E, and an act of hearing G, even when all occur simultaneously. For to hear the three notes as a major chord, I must apprehend the 1-3-5 musical interval that they instantiate. But this is possible only because the whole of my present consciousness is more than the sum of its parts. This whole is no doubt made up of the part-consciousnesses, but it is not exhausted by them. For it is also a consciousness of the relatedness of the notes. But this consciousness of relatedness is not something in addition to the other acts of consciousness: it includes them and embraces them without being reducible to them. So here we have an example of the diachronic-synchronic unity of consciousness.
These considerations appear to put paid to the conceit that AI systems can be conscious.
Or have I gone too far? You've heard me say that in philosophy there are few if any rationally compelling, ineluctably decisive, arguments for substantive theses. Are the above arguments among the few? Further questions obtrude themselves, for example, "What do you mean by 'material system'?" "Could a panpsychist uphold the consciousness of advanced AI systems?"
Vita brevis, philosophia longa. (Life is short, philosophy long.)