What Ever Happened to Bernie Goetz?

Bernard Goetz, mild-mannered electronics nerd, looked like an easy mark, a slap job. And so he got slapped around, thrown through plate glass windows, mugged and harassed. He just wanted to be left alone to tinker in his basement. One day he decided not to take it any more and acquired a .38 'equalizer.' And so the black punks who demanded money of him on the New York subway in December of '84 paid the price, to the delight of conservatives and the consternation of liberals. To the former he became a folk hero, to the latter a 'racist.' It was a huge story back then. One of the miscreants, James Ramseur, has been found dead of an apparent drug overdose.

Ramseur was freed from prison last year after serving 25 years for a rape, according to NBCNewYork.com. He was one of four black teens shot by Goetz on a train on Dec. 22, 1984, in a shooting that earned Goetz the nickname of "subway vigilante" in the city newspapers.

Meanwhile Goetz, 64, flourishes and runs a store called "Vigilante Electronics."

A heart-warming story on this, the eve of Christmas Eve.

The Overeducated

I once had a graduate student with whom I became friends. Ned Flynn, to give him a name, one day told me that after he finished high school he  wanted to follow in his father's footsteps and get a job with the railroad. His mother, however, wanted something 'better' for her son.   She wanted him to go to college, which he did, in the desultory  fashion of many. He ended up declaring a major in psychology and graduating. After spending some time in a monastery, perhaps also at  the instigation of his Irish Catholic mother, and still not knowing quite what  to do with himself, he was accepted into an M.A. program in  philosophy, which is where I met him. After goofing around for several more years, he took a job as a social worker, a job which did not suit him. Last I saw him he was in his mid-thirties and pounding nails.

His complaint to me was that, had he followed his natural bent, he would have had fifteen or so years of job seniority with the railroad, a good paycheck, and a house half paid for. Instead, he wasted years   on studies for which he had no real inclination, and no real talent.  He had no discernible interest in the life of the mind, and like most  working class types could not take it seriously. If you are from the working class, you will know what I mean: 'real' work must involve  grunting and sweating and schlepping heavy loads. Those who work on oil rigs or in the building trades do real work.  Reading, writing, and thinking are activities deemed effete and not quite real. When my  mother saw me reading books, she would sometimes tell me to go outside and do something. That use of 'do' betrayed her working class values.  What she didn't realize was that by reading all those fancy books I  was putting myself in a position where I could live by my wits and avoid the schlepping and grunting. Of course, the purpose of the life of the mind is not to avoid grunt work, with which I have some acquaintance, but to live a truly human life, whether one fills one's belly from it or not.

'Overeducation' is perhaps not the right word for cases like my former student Ned. Strictly speaking, one cannot be overeducated since there is and can be no end to true education. The word is from the Latin e-ducere, to draw out, and there can be no end to the process of actualizing the potential of a mind with an aptitude for learning. Perhaps the right word is 'over-credentialed.' It is clear that what most people in pursuit of 'higher education' want is not an education, strictly speaking, but a credential that will gain them admittance to a certain social and/or economic status. 'Education' as most people use the word nowadays is a euphemism for a ticket to success, where the latter is defined in terms of money and social position.

Beckwith, Hitch, and the Foundations of Morality

Here.  Excerpt:

. . . [Christopher] Hitchens writes that he and other atheists “believe with certainty that an ethical life can be lived without religion,” thus implying that he and others have direct and incorrigible acquaintance with a natural moral law that informs their judgments about what counts as an ethical life.

But to speak of a natural moral law – a set of abstract, immaterial, unchanging principles of human conduct that apply to all persons in all times and in all places – seems oddly out of place in the universe that Hitchens claimed we occupy, a universe that is at bottom a purposeless vortex of matter, energy, and scientific laws that eventually spit out human beings.

Right. It is easy to confuse two very different questions, and Sam Harris, one of the Four Horsemen of the New Atheism, does confuse them, as I argue here.

Q1. Given some agreed-upon moral code, are people who profess some version of theism more 'moral,' i.e., more likely to live in accordance with the agreed-upon code, than those who profess some version of atheism?

However it be answered, (Q1) is not philosophically interesting, except as part of the run-up to a genuine philosophical question, though it is of interest sociologically.   Suppose we grant, arguendo, that the answer to (Q1) is in the negative.  Now contrast (Q1) with

Q2. Given some agreed-upon moral code, are atheists justified in adhering to the code?

The agreed-upon code is one that most or many atheists and theists would accept. Thus don't we all object to child molestation, wanton killing of human beings, rape, theft, lying, and swindling in the manner of Madoff? Even swindlers object to being swindled!  And in objecting to these actions, we mean our objections to be more than merely subjectively valid. When our property is stolen or a neighbor murdered, we consider that an objective wrong has been done. And when the murderer is apprehended, tried, and convicted we judge that something objectively right has been done. Let's not worry about the details or the special cases: killing in self-defense, abortion, etc. Just imagine some minimal objectively binding code that all or most of us, theists and atheists alike, accept.

What (Q2) asks about is the foundation or basis of the agreed-upon objectively binding moral code. This is not a sociological or any kind of empirical question. Nor is it a question in normative ethics. The question is not what we ought to do and leave undone, for we are assuming that we already have a rough answer to that. The question is meta-ethical: what does morality rest on, if on anything?

Beckwith is quite right that the naturalist/physicalist/materialist is going to have a hard time justifying his adherence to the moral prescriptions and proscriptions that most of us, theist and atheist alike, accept.  I would argue that a naturalist/physicalist/materialist ought to be a moral nihilist, and that when these types fight shy of moral nihilism that merely shows an inability or unwillingness on their part to appreciate the logical consequences of their own doctrine, or else some sort of psychological compartmentalization. 

I once knew a hard-assed logical positivist who during the work week practiced his positivism, but on Sundays attended Eastern Orthodox religious services.  He avoided cognitive dissonance by compartmentalizing.

The compartmentalized life is the suboptimal life.  Seek existential unity and consistency.

The Limits of Secularism

Call it synchronicity if you like, but a Port Angeles reader points me to this article by Rabbi Lord Sacks which complements the article by Theroux to which I linked in the previous post.  Excerpt:

So there it is: the evidence that intellectuals have systematically misunderstood the nature of religion and religious observance and have constantly been thinking, for the better part of three centuries, that religion was about to disappear, yet it hasn't. In certain parts of the world it is growing. The 21st century is likely to be a more religious century than the 20th. It is interesting that religion is particularly growing in places like China where the economy is growing.

We must ask ourselves why this is, because it is actually very odd indeed. Think about it: every function that was once performed by religion can now be done by something else. In other words, if you want to explain the world, you don't need Genesis; you have science. If you want to control the world, you don't need prayer; you have technology. If you want to prosper, you don't necessarily seek God's blessing; you have the global economy. You want to control power, you no longer need prophets; you have liberal democracy and elections.

If you're ill, you don't need a priest; you can go to a doctor. If you feel guilty, you don't have to confess; you can go to a psychotherapist instead. If you're depressed, you don't need faith; you can take a pill. If you still need salvation, you can go to today's cathedrals, the shopping centres of Britain — or as one American writer calls them, weapons of mass consumption. Religion seems superfluous, redundant, de trop. Why then does it survive?

My answer is simple. Religion survives because it answers three questions that every reflective person must ask. Who am I? Why am I here? How then shall I live? We will always ask those three questions because homo sapiens is the meaning-seeking animal, and religion has always been our greatest heritage of meaning. You can take science, technology, the liberal democratic state and the market economy as four institutions that characterise modernity, but none of these four will give you an answer to those questions that humans ask.

I came to a similar conclusion in Why Science Will Never Put Religion Out of Business.

Happy Hanukkah

Jewish Philosophers. Jewish Chess Players. Other lists are accessible via these links. Roots of Jew hatred? One is undoubtedly envy. Jews have made contributions to culture far in excess of their numbers. No wonder they are so hated in the Muslim, and not only in the Muslim, world. And you say you don't believe that man is a fallen being? I would argue that failure to perceive one's fallen status is part of the Fall. I will be coming back to this topic. For now I point out that even Michael Ruse takes it seriously, to his credit, and to the displeasure of the very bright boneheads of the New Atheism, one of whom has recently passed from our midst.

I found no lists for Jewish Hikers or Jewish Outdoorsmen.  Does that help explain Peter Lupu's and Grandpatzer Ed Yetman's utter incomprehension of  my hiking and backpacking and running activities?  It is not only that they would never do such a thing; they express astonishment that anyone should want to do such a thing.

I've heard chess referred to as Jewish athletics. 

Of Christograms and Political Correctness

Monterey Tom liked my 'Xmas' post and sends this:

Many Catholic artifacts related to worship are marked with the Roman letters IHS, which is a partial Latin transliteration of the Greek form of 'Jesus' and can also be read as an acronym for the Latin Iesus Hominum Salvator (Jesus Savior of Man). However, some have construed the IHS to be an acronym for "In this Sign", as in "In this sign you shall conquer." Some who were desirous of defending the judgements of the Obama administration used that last and incorrect notion to justify covering all of the IHS images at Georgetown University two years ago on the ground that Muslims would see  the IHS as a symbol of Christian aggression. My reaction to that  claim is that the event presented the U.S. government with what educators now call a "teachable moment." The only problem being, I suspect,  that no one in the White House gang actually knew the true meaning of the letters and probably shared the Muslim belief that the Crusades were wars of aggression aimed at forcefully converting the peace-loving Muslims and enriching the pope.

Although it is true that 'IHS' is, as Tom writes, "a partial Latin transliteration of the Greek form of 'Jesus'," it is not true that it abbreviates Iesus Hominum Salvator, at least according to the Catholic Encyclopedia: "IHS was sometimes wrongly understood as 'Jesus Hominum (or Hierosolymae) Salvator', i.e. Jesus, the Saviour of men (or of Jerusalem=Hierosolyma)."

Being a pedant and a quibbler (but in the very  best senses of these terms!), I was all set to quibble with Monterey Tom's use of 'acronym' in connection with 'IHS.'  After all, you cannot pronounce it like a word in the way you can pronounce 'laser' and 'Gestapo' which are clearly acronyms.  But it all depends on how exactly we define 'acronym,' a question I'm not in the mood for.  The Wikipedia article looks good, however.  I am tempted to say that, while every acronym is an abbreviation, not every abbreviation is an acronym.  'IHS' is an abbreviation.

Acronym or not, 'IHS'  is a Christogram, and sometimes a monogram.  As it just now occurred in my text, 'IHS' is not a monogram but a mere abbreviation.  But again it depends on what exactly a monogram is.  According to the Wikipedia monogram article, "A monogram is a motif made by overlapping or combining two or more letters or other graphemes to form one symbol."  Clear examples:

(Two images appeared here: a Chi-Rho monogram and an IHS monogram.)

In the first monogram one can discern alpha, omega, chi, and rho. The 'chi,' as I said last post, is the 'X' in 'Xmas.'

From pedantry to political correctness and a bit of anti-Pee Cee polemic. To think that 'IHS' abbreviates In hoc signo vinces shows a contemptible degree of ignorance, but what is worse is to worry about a possible Muslim misreading of the abbreviation. Only a namby-pamby Pee-Cee dumbass liberal could sink to that level. That is down there with the supine foolishness of those librul handwringers who wailed, in the wake of 9/11, "What did we do to offend them?"

If hypersensitive Muslims take offense at 'IHS,' that is their problem, not ours.  There is such a thing as taking  inappropriate offense.  See Of Black Holes and Political Correctness: If You Take Offense, is That My Fault?

As for Georgetown's caving to the White House demand, that is contemptible and disgusting, but so typical.  To paraphrase Dennis Prager, there is no one so spineless in all the world as a university administrator.  They should have said loud and clear "Absolutely not!"

Merry CHRISTmas!

‘Merry Xmas’

When I was eight years old or so and first took note of the phrase 'Merry Xmas,' my piety was offended by what I took to be the removal of 'Christ' from 'Christmas' only to be replaced by the universally recognized symbol for an unknown quantity, 'X.' But it wasn't long before I realized that the 'X' was merely a font-challenged typesetter's attempt at rendering the Greek Chi, an ancient abbreviation for 'Christ.' There is therefore nothing at all offensive in the expression 'Xmas.' Year after year, however, certain ignorant Christians who are old enough to know better make the mistake that I made when I was eight and corrected when I was ten.

It just now occurs to me that 'Xmas' may be susceptible of a quasi-Tillichian reading. Paul Tillich is famous for his benighted definition of 'God' as 'whatever is one's ultimate concern.' Well, take the 'X' in 'Xmas' as a variable the values of which are whatever one wants to celebrate at this time of year. So for some, 'Xmas' will amount to Solsticemas, for burglars Swagmas, for materialists Lootmas, for gluttons Foodmas, for inebriates Hoochmas, and for ACLU extremists Antichristianitymas.

A reader suggests some further constructions:

For those who love the capital of the Czech Republic: Pragmas. For Dutch Reformed theologians of Frisian extraction who think Christmas is silly: Hoekemas. For Dutch Reformed philosophy professors of Frisian extraction who like preserves on their toast: Jellemas. For fans of older British sci-fi flicks: Quatermas. For those who buy every special seasonal periodical they can get their hands on: Magmas. One could probably multiply such examples ad nauseam, so I won't.

How could an ACLU bonehead object to 'Xmas' so construed? No doubt he would find a way.

A while back I quipped that "Aporeticians qua aporeticians do not celebrate Christmas. They celebrate Enigmas."  My man Hodges shot back:  "But they do celebrate 'X-mas'! (Or maybe they 'cerebrate' it?)"

Merry Chimas to all, and to all a good night.

The ‘Is’ of Identity and the ‘Is’ of Predication

Bill Clinton may have brought the matter to national attention, but philosophers have long appreciated that much can ride on what the meaning of 'is' is. 

Edward of London has a very good post in which he raises the question whether the standard analytic distinction between the 'is' of identity and the 'is' of predication is but fallout from an antecedent decision to adhere to an absolute distinction between names and predicates.  If the distinction is absolute, as Frege and his epigoni maintain, then names cannot occur in predicate position, and a distinction between the two uses of 'is' is the consequence.  But what if no such absolute distinction is made?  Could one then dispense with the standard analytic distinction?  Or are there reasons independent of Frege's function-argument analysis of propositions for upholding the distinction between the two uses of 'is'?

To illustrate the putative distinction, consider

1. George Orwell is Eric Blair

and

2. George Orwell is famous.

Both sentences feature a token of 'is.'  Now ask yourself: is 'is' functioning in the same way in both sentences? The standard analytic line is that 'is' functions differently in the two sentences.  In (1) it expresses identity; in (2) it expresses predication. Identity, among other features, is symmetrical; predication is not.  That suffices to distinguish the two uses of 'is.'  'Famous' is predicable of Orwell, but Orwell is not predicable of  'famous.'  But if Blair is Orwell, then Orwell is Blair.
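The symmetry point can be made concrete with a toy model (my own illustration, not part of the original discussion): treat a name as referring to an individual and a predicate as a set of individuals, its extension. Identity then comes out symmetric by construction, while 'predicating' a name of a predicate has no reading at all.

```python
# Toy model: names map to referents; a predicate is modeled by its extension.
# Two names, one referent: 'George Orwell' is 'Eric Blair'.
referent = {"George Orwell": "person_1", "Eric Blair": "person_1"}

# The extension of 'famous': the set of individuals the predicate is true of.
famous = {"person_1", "person_2"}

def is_identical(a, b):
    """The 'is' of identity: the two names pick out the same individual."""
    return referent[a] == referent[b]

def is_predicated(subject, extension):
    """The 'is' of predication: the subject's referent falls under the predicate."""
    return referent[subject] in extension

# Identity is symmetric: each assertion entails the other.
assert is_identical("George Orwell", "Eric Blair")
assert is_identical("Eric Blair", "George Orwell")

# Predication is not symmetric; indeed the converse isn't even well-formed
# here: an extension has no referent, so 'famous is Orwell' gets no reading.
assert is_predicated("George Orwell", famous)
```

The asymmetry shows up in the very types of the two functions: `is_identical` takes two names, `is_predicated` a name and an extension, mirroring the Fregean point that names and predicates occupy different positions.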

Now it is clear, I think, that if one begins with the absolute name-predicate distinction, then the other distinction is also required. For if  'Eric Blair' in (1) cannot be construed as a predicate, then surely the 'is' in (1) does not express predication.  The question I am raising, however, is whether the distinction between the two uses of 'is' arises ONLY IF  one distinguishes absolutely and categorially between names and predicates.

Fred Sommers seems to think so.  Referencing the example 'The morning star is Venus,' Sommers  writes, "Clearly it is only after one has adopted the syntax that prohibits the predication of proper names that one is forced to read 'a is b' dyadically and to see in it a sign of identity." (The Logic of Natural Language, Oxford 1982, p. 121, emphasis added)  The contemporary reader will of course wonder how else 'a is b' could be read if it is not read as expressing a dyadic relation between a and b.  How the devil could the 'is' in 'a is b' be read as a copula?

This is what throws me about the scholastic stuff peddled by Ed and others. In 'Orwell is famous' they seem to want to say that 'Orwell' and 'famous' refer to the same thing. But what could that mean?

First of all, 'Orwell' and 'famous' do not have the same extension: there are many famous people, but only  one Orwell.  But even if Orwell were the only famous person, Orwell would not be identical to the only famous person.  Necessarily, Orwell is Orwell; but it is not the case that, necessarily, Orwell is the only famous person, even if it is true that Orwell is the only famous person, which he  isn't.

If you tell me that only 'Orwell' has a referent, but not 'famous,' then I will reply that that is nominalism for the crazy house.  Do you really want to say or imply that Orwell is famous because in English we apply the predicate 'famous' to him?  That's ass-backwards or bass-ackwards, one.  We correctly apply 'famous' to him because he is, in reality, famous.  (That his fame is a social fact doesn't  make it language-dependent.)  Do you really want to say or imply that, were we speaking German, Orwell would not be famous but beruehmt?  'Famous' is a word of English while beruehmt is its German equivalent.  The property, however, belongs to neither language.  If you say there are no properties, only predicates, then that smacks of the loony bin.

Suppose 'Orwell' refers to the concrete individual Orwell, and 'famous' refers to the property, being-famous.  Then you get for your trouble a different set of difficulties.  I don't deny them!  But these difficulties do not show that the scholastic view is in the clear.

This pattern repeats itself throughout philosophy.  I believe I have shown that materialism about the mind faces insuperable objections, and that only those in the grip of naturalist ideology could fail to feel their force.   But it won't do any good to say that substance dualism also faces insuperable objections.  For it could be that both are false/incoherent.  In fact, it could be that every theory proposed (and proposable by us) in solution of  every philosophical problem is false/incoherent. 

Reinhardt Grossmann (1931-2010)

An obituary by his Indiana University colleague, Nino Cocchiarella. 

"Grossmann was well known among his colleagues for his eagerness to discuss philosophical problems and to engage in sustained debate on fundamental positions."  Sounds right.  When I, a stranger, wrote Grossmann sometime in the '80s and posed some questions for him, he responded in a thorough and friendly manner.  May peace be upon him.

Here is another obituary  by Javier Cumpa and Erwin Tegtmeier.  It ends with a tantalizing reference to the book Grossmann was working on when felled by a massive stroke: Facts.  I hope Grossmann's literary executors make the manuscript available.

The summer of '84 found me in Bloomington, Indiana. Thanks to the largesse of the American taxpayer, I was a 'seminarian' in Hector-Neri Castaneda's NEH Summer Seminar. One afternoon we repaired to a bar where we encountered Professor Grossmann. He told a story about the 19th-century German philosopher Kuno Fischer, who was a big name in his day and a professor at Heidelberg. One day some workmen were making a racket outside his apartment. This incensed the good professor and he warned the workmen: "If you don't stop making this noise, I will leave Heidelberg!" The workmen stopped. Grossmann remarked that had Quine lodged a similar complaint, the workmen would have laughed and bid him goodbye.

Memory, Memory Traces, and Causation

Passing a lady in the supermarket I catch a whiff of patchouli. Her scent puts me in mind of hippy-trippy Pamela from the summer of '69. An olfactory stimulus in the present causes a memory, also in the present, of an event long past, a tête-à-tête with a certain girl. How ordinary, but how strange! Suddenly I am 'brought back' to the fantastic and far-off summer of '69. Ah yes! What is memory and how does it work? How is it even possible?

Let's start with the 'datanic' as I like to say:

1. There are (veridical) memories through which we gain epistemic access to the actual past, to events that really happened.  The above example is a case of episodic personal memory.  I remember an event in my personal past.  To be precise, I remember my having experienced an event in my personal past.  My having been born by Caesarean section is also an episode from my personal past, and I remember that that was my mode of exiting my mother's body; but I don't remember experiencing that transition.  So not every autobiographical memory is a personal episodic memory.  The latter is the only sort of memory I will be discussing in this post.  The sentence in boldface is the nonnegotiable starting point of our investigation. 

We now add a couple of more theoretical and less datanic propositions, ones which are not obvious, but are  plausible and accepted by many theorists:

2. Memory is a causal notion.  A mental image of a past event needn't be a memory of a past event.  So what makes a mental image of a past event a memory image?  Its causal history.  My present memory has a causal history that begins with the event in 1969 as I experienced it.

3. There is no action at a temporal distance.  There is no direct causation over a temporal gap.  There are no remote causes; every cause is a proximate cause.  A necessary ingredient of causation is spatiotemporal contiguity.  So while memory is a causal notion, my present memory of the '69 event is not directly caused by that event.  For how could an event that no longer exists directly cause, over a decades-long temporal gap, a memory event in the present?  That would seem to be something 'spooky,' a kind of magic. 

Each of these propositions lays strong claim to our acceptance.  But how can they all be true?  (1) and (2) taken together appear to entail the negation of (3).  How then can we accommodate them all?

Memory trace theories provide a means of accommodation. Suppose there are memory traces or engrams engraved in some medium. For materialists this medium will have to be the brain. One way to think of a memory trace is as a brain modification that was caused at the time of the original experience, and that persists since that time. So the encounter with Pam in '69 induced a change in my brain, left a trace there, a trace which has persisted since then. When I passed the patchouli lady in the supermarket, the olfactory stimulus 'activated' the dormant memory trace. This activation of the memory trace either is or causes the memory experience whose intentional object is the past event. With the help of memory traces we get causation without action at a temporal distance.

(Far out, man!)
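The trace schema just outlined can be caricatured in a few lines of code. This is a toy model of my own construction, with invented names like `experience` and `stimulus`; it is not a serious model of the brain. The point is only that every causal link in the chain connects temporally contiguous items: event to trace at the time of the event, stimulus to activation at the time of recall.

```python
# Toy model of the memory-trace schema: a past event lays down a persisting
# trace; a later stimulus activates the trace, yielding a present memory.
# No link in the chain spans a temporal gap.

class Brain:
    def __init__(self):
        self.traces = {}  # cue -> dormant trace content

    def experience(self, cue, event):
        # The original experience causes a trace at the time it occurs.
        self.traces[cue] = event

    def stimulus(self, cue):
        # A present stimulus activates the persisting trace (if any),
        # producing a present memory of the past event.
        return self.traces.get(cue)

brain = Brain()
brain.experience("patchouli", "tete-a-tete with Pamela, summer of '69")
# Decades later, the scent in the supermarket:
memory = brain.stimulus("patchouli")
assert memory == "tete-a-tete with Pamela, summer of '69"
```

Notice that this toy trace simply stores a description of the event, quietly building representation into the trace from the start; whether a trace could instead play a merely causal role is exactly the question taken up next.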

The theory or theory-schema just outlined seems to allow us to uphold each of the above propositions. In particular, it seems to allow us to explain how a present memory of a past event can be caused by the past event without the past event having to jump the decades-long temporal gap between event remembered and memory.  The memory trace laid down in '69 by the original experience exists in the present and is activated in the present by the sensory stimulus.  Thus the temporal contiguity requirement is satisfied.  And if the medium in which the memory traces are stored is the brain or central nervous system, then the spatial contiguity requirement is also satisfied.

Question:  Could memory traces play merely causal roles?

Given (2) and (3), it seems that memory traces must be introduced as causal mediators between past and present. But could they be just that? Or must they also play a representational role? Intuitively, it seems that nothing could be a memory trace unless it somehow represented the event of which it is a trace. If E is the original experience, and T is E's trace, then it seems we must say that T is of E in a two-fold sense corresponding to the difference between the subjective and objective genitive. First, T is of E in that T is E's trace, the one that E caused. Second, T is of E in that T represents E.

It seems obvious that a trace must represent. In my example, the sensory stimulus (the whiff of patchouli) is not of or about the '69 event. It merely activates the trace, rendering the dispositional occurrent. But the memory is about the '69 event. So the aboutness must reside in the trace. The trace must represent the event that caused it – and no other past event. The memory represents because the trace represents. If the trace didn't represent anything, how could the memory — which is merely the activation of the trace or an immediate causal consequence of the activation of the trace — represent anything? How a persisting brain modification — however it is conceived, whether static or dynamic, whether localized or nonlocalized — can represent anything is an important and vexing question, but one I will discuss in a later post.

Right now I want to nail down the claim that memory traces cannot play a merely causal role, but must also bear the burden of representation.

Suppose a number of strangers visit me briefly.  I want to remember them,  but my power of memory is very weak and I know I will not remember them without the aid of some mnemonic device.  So I have my visitors leave calling cards.  They do so, except that they are all the same, and all blank (white).  These blank cards are their traces, one per visitor.   The visitors leave, but the cards remain behind as traces of their visit.  I store the cards in a drawer.  I 'activate' a card by pulling it out of storage and looking at it.  I am then reminded (at most) that I had a visitor, but not put in mind of any particular visitor such as Tom.  So even if the card in my hand was produced by Tom, that card is useless for the purpose of remembering Tom.  Likewise for every other card.  Each was produced by someone in particular and only by that person; but none of them 'bring back' any particular person. 

Bear in mind that I don't directly remember any of my visitors. My only memory access to them is via their traces, their calling cards. For the visitors are long gone, just like the '69 experience. So the problem is not merely that I don't know which card is from which person; the problem is that I cannot even distinguish the persons.

Had each visitor left a differently colored card, that would not have helped. Nor are matters helped if each visitor leaves a different sort of trace: a bottle cap, a spark plug, a lock of hair, a guitar pick. Even if Tom is a guitar player and leaves a guitar pick, that is unhelpful too, since I have no access to Tom except via his trace.

So it doesn't matter whether my ten visitors leave ten tokens of the same type, or ten tokens each of a different type. Either way I won't be able to remember them via the traces they leave behind. Clearly, what I need from each visitor is an item that uniquely represents him or her — as opposed to an item that is merely caused to be in my house by the visitor. Suppose Tom left a unique guitar pick, the only one of its kind in existence. That wouldn't help either, since no inspection of that unique pick could reveal that it was of Tom rather than of Eric or Eric's cat. Ditto if Tom has signed his card or his pick 'Tom Riff.' That might be a phony name, or the name of him and his guitar — doesn't B. B. King call his guitar 'Lucille'?
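The calling-card point can also be caricatured computationally. This sketch is mine, not the author's or Heil's: if the traces are qualitatively identical, no function of a trace can recover its particular cause, whereas a trace that carries information about its source can. (For the author, of course, even a signed card only helps if the signature genuinely represents its source.)

```python
# Toy sketch: merely-caused, non-representational traces cannot
# discriminate among their causes.

# Ten visitors each leave a qualitatively identical blank card.
visitors = [f"visitor_{i}" for i in range(10)]
blank_cards = ["blank card" for _ in visitors]

def recall_from(card, cards):
    """All one can recover from a non-representational trace:
    at most THAT someone visited, never WHO."""
    return "someone visited" if card in cards else None

assert recall_from(blank_cards[3], blank_cards) == "someone visited"
# The cards are interchangeable: nothing about any card singles out
# its particular cause.
assert len(set(blank_cards)) == 1

# Contrast: a trace that carries information about its source does
# allow recovery of the particular cause.
signed_cards = {f"card signed by {v}": v for v in visitors}
assert signed_cards["card signed by visitor_3"] == "visitor_3"
```

The contrast is the whole argument in miniature: causal provenance alone fixes no mapping from trace back to cause; only content does.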

If I can remember that it was Tom who left the guitar pick, then of course I don't need the guitar pick to remember Tom by.  I simply remember Tom directly without the need for a trace.  On the other hand, if I do need a trace in order to remember long gone Tom, then that trace must have representational power: it cannot be merely something that plays a causal role. 

Trace theories have to avoid both circularity and vicious infinite regress.

Circularity. To explain the phenomenon of memory, the trace theory posits the existence of memory traces. But if the explanation in terms of traces ends up presupposing memory, then the theory is circular and worthless. If what makes the guitar pick a trace of Tom is that I remember that Tom left it, then the explanation is circular. Now consider the trace T in my brain which, when activated by stimulus S, causes a memory M of past experience E. M represents E because T represents E. What makes T represent E? What makes the memory trace caused by the encounter with Pam in '69 represent Pam or my talking with her? The answer cannot be that I remember the memory trace being caused by the encounter with Pam. For that would be blatantly circular. Besides, memory traces in the brain are not accessible to introspection.

Infinite Regress. Our question is: what makes T represent E and nothing else? To avoid circularity one might say this: there is a trace T* which records the fact of E's production of T, and T represents E in virtue of T*. But this leads to a vicious infinite regress. Suppose Sally leaves a photo of herself. How do I know that the photo is of Sally and not of her sister Ally? If you say that I directly remember Sally and thereby know that the photo is unambiguously of her, then you move in a circle. You may as well just say that we remember directly and not via traces. So, to hold onto the trace theory, one might say the following: there is a second photo depicting Sally and the first photo side by side. Inspection of this second photo reveals that the first photo is of Sally. But this leads to a regress: what makes the second photo a photo of the first?

Conclusion:  To avoid both circularity and infinite regress, memory traces must possess intrinsic representational power.  Their role cannot be merely causal.

A later post will then address the question whether memory traces could have intrinsic representational power.  If you are a regular reader of this blog you will be able to guess my answer.

REFERENCE:  John Heil, "Traces of Things Past," Philosophy of Science, vol. 45, no. 1 (March 1978), pp. 60-72.  My calling card example above is a reworking of Heil's tennis ball example.

 

Merry Scroogemas!

In this season especially we ought to find a kind word to say about the much maligned Ebenezer Scrooge. Here's mine: without Scrooge, the sexually prolific Cratchit wouldn't have a job and be able to support his brood! This thought is developed by Michael Levin in In Defense of Scrooge.

And is there not something preternaturally knuckleheaded about the calls from some liberals that the presentation of Dickens' masterpiece be banned? They ought to consider that there is more of anti-Capitalism in it than of Christianity — an irony that no doubt escapes their shallow pates.

Minimalist and Maximalist Modes of Holiday Impersonality

'Tis the season for the letter carriers of the world to groan under their useless burdens of impersonal greetings.

Impersonality in the minimalist style may take the form of a store-bought card with a pre-fabricated message to which is appended an embossed name. A step up from this is a handwritten name. Slightly better still is the nowadays common family picture with handwritten name but no message.

The maximalist style is far worse. Now we are in for a lengthy litany of the manifold accomplishments of the sender and his family which litany may run to a page or two of single-spaced text.

One size fits all. No attempt to address any one person as a person.

"It's humbug, I tell you, humbug!"