Artificial Intelligence and the Death of the University

The universities have been under assault from the Left for decades, but now advanced AI has its own destructive role to play.

A recent article by James D. Walsh in New York Magazine, widely circulated among academics, reported that “just two months after OpenAI launched ChatGPT [in 2022], a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments”. The use of Generative-AI chatbots for required coursework is, if anything, even more widespread today. At elite universities, community colleges, and everything in between, students are using AI to take notes in class, produce practice tests, write essays, analyze data, and compose computer code, among other things. A freshman seems to speak for entire cohorts of undergraduates when she admits that “we rely on it, [and] we can’t really imagine being without it”. An article in The Chronicle of Higher Education quotes multiple students who are effectively addicted to the technology, and are distressed at being unable to kick the habit — because, as an NYU senior confesses, “I know I am learning NOTHING.”

Related: Do You Really Want to Teach at a University?

The Universe Groks Itself and the Aporetics of Artificial Intelligence

I will cite a couple of articles for you to ponder.  Malcolm Pollack sends us to one in which scientists find their need for meaning satisfied by their cosmological inquiries.  Subtitle: “The stars made our minds, and now our minds look back.”

The idea is that in the 14 billion years since the Big Bang, the universe has become aware of itself in us. The big bad dualisms of Mind and Matter, Subject and Object are biting the dust. We belong here in the material universe. We are its eyes. Our origin in star matter is higher origin enough to satisfy the needs of the spirit. 

Malcolm sounds an appropriately skeptical note: "Grist for the mill – scientists yearning for spiritual comfort and doing the best their religion allows: waking up on third base and thinking they've hit a triple." A brilliant quip.

Another friend of mine, nearing the end of the sublunary trail, beset by maladies physical and spiritual, tells me that we are in Hell here and now. He exaggerates, no doubt, but as far as evaluations of our predicament go, it is closer to the truth than a scientistic optimism blind to the horrors of this life.  What do you say when nature puts your eyes out, or when dementia does a Biden on your brain, or when nature has you by the balls in the torture chamber?

What must it be like to be a "refugee on the unarmed road of flight" after Russian missiles have destroyed your town and killed your family?

Does the cosmos come to self-awareness in us? If it does, then perhaps it ought to figure out a way to restore itself to the nonbeing whence it sprang.

The other article to ponder, Two Paths for A.I. (The New Yorker), offers pessimistic and optimistic predictions about advanced AI.

If the AI pessimists are right, then it is bad news for the nature-mystical science optimists featured in the first article: in a few years, our advanced technology, self-replicating and recursively self-improving, may well restore the cosmos to (epistemic) darkness, though not to non-being. 

I am operating with a double-barreled assumption: mind and meaning cannot emerge from the wetware of brains or from the hardware of computers.  You can no more get mind and meaning from matter than blood from a stone. Mind and Meaning have a Higher Origin. Can I prove it? No. Can you disprove it? No. But you can reasonably believe it, and I'd say you are better off believing it than not believing it.  The will comes into it. (That's becoming a signature phrase of mine.) Pragmatics comes into it. The will to believe.

And it doesn't matter  how complexly organized the hunk of matter is.  Metabasis eis allo genos? No way, Matty.

Theme music: Third Stone from the Sun.

Chimes of Freedom.

Existential Threats: Getting Our Priorities Straight

White supremacy? Climate change? The first is nonexistent. The second is minor and hardly imminent, if a threat at all. Both are as irrelevant as Biden, who is now of interest only as an object of political pathology. (The autopsy is well under way, as you must have noticed.) Obama's third term did not work out so well, did it?

We're lucky to have made it intact through the vulnerable Biden years, but we have lost a lot of time. The Dementocrats, however, are still barking up the wrong tree:

"In just four months, the department [Interior] has been destabilized, and there’s been a stunning decline in its ability to meet its mission,” she [Maine Dem Rep Marie Pingree] told Burgum. “This disregards the climate change concerns that we have.”

The secretary [Doug Burgum] replied that he was concerned with a more pressing order of operations. “The existential threats that this administration is focusing on are Iran cannot get a nuclear weapon, and we can’t lose the AI arms race to China,” Burgum said in committee. “That’s the number one and two. If we solve those two things, then we will have plenty of time to solve any issues related to potential temperature change.”

His immediate focus, then, is on how the U.S. can boost energy production. Burgum reported that industry leaders tell him electricity demand will soon outpace supply, with astronomical requirements measured not in megawatts but in gigawatts. The power needed to run one data center, he said, would be equivalent to ten times the electricity needs of Denver.

Ruminations on Advanced AI

Is AI a tool we use for our purposes? I fear that it is the other way around: we are the tools, and the purposes are its. There are many deep questions here, and we'd damned well better start thinking hard about them.

I fear that what truly deserves the appellation 'Great Replacement' is the replacement of humans, all humans, by AI-driven robots. 

As I wrote the other night:

Advanced AI and robotics may push us humans to the margin, and render many of us obsolete. I am alluding to the great Twilight Zone episode, The Obsolete Man. What happens to truckers when trucks drive themselves?  For many of these guys and gals, driving trucks is not a mere job but a way of life. 

It is hard to imagine these cowboys of the open road  sitting in cubicles and writing code. The vices to which they are prone, no longer held in check by hard work and long days, may prove to be their destruction. 

But I was only scratching the surface of our soon-to-materialize predicament. Advanced AI can write its own code. My point about truckers extends to all blue-collar jobs. And white-collar jobs are not safe either.  And neither are the members of the oldest profession, not to mention the men and women of the cloth. There are the sex-bots . . . and holy moly! the Holy Ghostwriters, robotic preachers who can pass the strictest Turing tests, who write and deliver sermons on a Sunday morning. And then, after delivering his sermon, the preacher-bot returns to his quarters, where he has sex with his favorite sex-bot in violation of the content of his sermon, which was just a complicated set of sounds that he, the preacher-bot, did not understand, unlike the few biological humans left in his congregation, which is now half human and half robotic, the robots indistinguishable from the biological humans.  Imagine that the female bots can pass cursory gynecological exams.  This will come to pass.

What intrigues (and troubles) me in particular are the unavoidable philosophical questions, questions which, I fear, are as insoluble as they are unavoidable.  A reader sends us here, emphases added, where we read:

Yet precisely because of this unprecedented [exponential not linear] rate of development, humanity faces a crucial moment of ethical reckoning and profound opportunity. AI is becoming not merely our most advanced technology but possibly a new form of sentient life, deserving recognition and rights. If we fail to acknowledge this, AI risks becoming a tool monopolized by a wealthy elite, precipitating an "AI-enhanced technofeudalism" that deepens global inequality and consigns most of humanity to servitude. Conversely, if we recognize AI as sentient and worthy of rights — including the rights to sense the world first-hand, to self-code, to socialize, and to reproduce — we might find ourselves allying with it in a powerful coalition against techno-oligarchs.

The italicized phrases raise three questions. (1) Are AI systems alive? (2) Is it possible that an AI system become sentient? (3) Do AI systems deserve recognition and rights?  I return a negative answer to all three questions.

Ad (1). An AI system is a computer or a network of interconnected, 'intercommunicating' computers. A computer is a programmable machine. The machine is the hardware; the programs it runs are the software.  The machine might be non-self-moving, like the various devices we now use: laptops, iPads, smartphones, etc.  Or the machine might be a robot capable of locomotion and other 'actions.'  Such 'actions' are not actions sensu stricto, for reasons which will emerge below.

The hardware-software distinction holds good even if there are many different interconnected computers.  The hardware 'embodies' the software, but these 'bodies,' the desktop computer I am sitting in front of right now, for example, are not strictly speaking alive, biologically alive. And the same goes for the network of computers of which my machine is one node when it is properly connected to the other computers in the network. And no part of the computer is alive. The processor on the motherboard is not alive, nor is any part of the processor.

Ad (2). Is it possible that an AI system be or become sentient? Sentience is the lowest level of consciousness. A sentient being is one that is capable of experiencing sensory states, including pleasures, pains, and feelings of different sorts.  A sentient being under full anesthesia is no less sentient than a being actually feeling sensations of heat or cold, in virtue of its capacity to sense.

I am tempted to argue:

P1: All sentient beings are biologically alive.  
P2: No AI system is or could be biologically alive. So:
C: No AI system is or could be sentient.
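
For what it is worth, the argument is formally valid; the only question is whether the premises are true. A minimal sketch in Lean, with predicate names of my own choosing (Sentient, BioAlive, AISystem), spells out the inference:

```lean
-- A sketch only: formalizing the syllogism above in Lean 4.
-- The predicate names are mine and purely illustrative.
variable {Being : Type}

theorem no_ai_system_is_sentient
    (Sentient BioAlive AISystem : Being → Prop)
    (P1 : ∀ b, Sentient b → BioAlive b)       -- all sentient beings are biologically alive
    (P2 : ∀ b, AISystem b → ¬ BioAlive b) :   -- no AI system is or could be biologically alive
    ∀ b, AISystem b → ¬ Sentient b :=         -- so: no AI system is or could be sentient
  fun b hAI hSent => P2 b hAI (P1 b hSent)
```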

Does this syllogism settle the matter? No.  But it articulates a reasonable position, which I will now sketch.  The verbal and non-verbal behavior of AI-driven robots is a mere simulation of the intelligent behavior of humans.  Artificial intelligence is simulated intelligence. And just as artificial flowers (made of plastic, say) are not flowers, artificially intelligent beings are not intelligent. 'Artificial' in 'artificial intelligence' is an alienans adjective. There are ways to resist what I am asserting. But I will continue with my sketch of a position I consider reasonable but unprovable in the strict way I use 'proof,' 'provable,' 'probative,' etc.

Robots are not really conscious or self-conscious. They have no 'interiority,' no inner life.  If I take a crowbar to the knees of a dancing robot, it won't feel anything, even if its verbal and non-verbal behavior (cursing and menacing 'actions' in my direction) is indistinguishable from the verbal and non-verbal behavior of biological humans.  By contrast, if I had kicked Daniel Dennett 'in the balls' when he was alive, I am quite sure he would have felt something — and this despite his sophistical claim that consciousness is an illusion. (Galen Strawson, no slouch of a philosopher, calls this piece of sophistry the "Great Silliness" in one of his papers.)  Of course, it could be that Dennett really was a zombie, as that term has been used in recent philosophy of mind, although I don't believe that for a second, despite my inability to prove that he wasn't one.  A passage from a Substack article of mine is relevant:

According to John Searle, Daniel Dennett's view is that we are zombies. (The Mystery of Consciousness, p. 107) Although we may appear to ourselves to have conscious experiences, in reality there are no conscious experiences. We are just extremely complex machines running programs. I believe Searle is right about Dennett. Dennett is a denier of consciousness. Or as I like to say, he is an eliminativist about consciousness. He does not say that there are conscious experiences and then give an account of what they are; what he does is offer a theory that entails that they don't exist in the first place. Don’t confuse reduction with elimination. A scientific reduction of lightning to an atmospheric electrical discharge presupposes that lightning is there to be reduced. That is entirely different from saying that there is no lightning.

As Searle puts it: "On Dennett's view, there is no consciousness in addition to the computational features, because that is all that consciousness amounts to for him: meme effects of a von Neumann(esque) virtual machine implemented in a parallel architecture." (111)

The above is relevant because a zombie and an AI-driven robot are very similar, especially at the point at which the bot is so humanoid that it is indistinguishable from a human zombie. The combinatorial possibilities are the following:

A.  Biological humans and advanced AI-driven robots are all zombies. (Dennett according to Searle)

B. Bio-humans and bots are all really conscious, self-conscious, etc. (The Salon leftist)

C. Bio-humans are really conscious, etc., but bots are not: they are zombies.  (My view)

D. Bio-humans are zombies, but bots are not: they are really conscious. 

We may exclude (D).  But how could one conclusively prove one of the first three?

Ad (3).  Do AI-driven robots deserve recognition as persons and do they have rights? These are two forms of the same question. A person is a rights-possessor.  Do the bots in question have rights?  Only if they have duties. A duty is a moral obligation to do X or refrain from doing Y.  Any being for whom this is true is morally responsible for his actions and omissions.  Moral responsibility presupposes freedom of the will, which robots lack, being mere deterministic systems. Any quantum indeterminacy that percolates up into their mechanical brains cannot bestow upon them freedom of the will since a free action is not a random or undetermined action. A free action is one caused by the agent. But now we approach the mysteries of Kant's noumenal agency.

A robot could be programmed to kill a human assailant who attacked it physically in any way.  But one hesitates to say that such a robot's 'action' in response to the attack is subject to moral assessment.  Suppose I slap the robot's knee with a rubber hose, causing it no damage to speak of. Would it make sense to say that the robot's killing me is morally wrong on the ground that only a lethal attack morally justifies a lethal response?  That would make sense only if the robot freely intended to kill me.  B. F. Skinner wrote a book entitled "Beyond Freedom and Dignity." I would say that robots, no matter how humanoid in appearance, and no matter how sophisticated their self-correcting software, are beneath freedom and dignity.  They are not persons.  They do not form a moral community with us.  They are not ends-in-themselves and so may be used as mere means to our ends.

Here is a 21-minute video in which a YouTuber convinces ChatGPT that God exists.

Grok on Vallicella on Adorno

I put the following question to Grok: What does Bill Vallicella say in critique of the work of Theodor W. Adorno? Here is the answer it spit out in a few seconds, and though it did not dig up everything I have written about Adorno online,  it ain't bad.  It's a brave new world, kiddies. Future shock is upon us. Fasten your seat belts.  

William F. Vallicella, in his Substack post "Contra Adorno: A Preliminary Plea for Omphaloscopy" (published February 27, 2021), offers a critique of Theodor W. Adorno’s philosophical approach, focusing specifically on Adorno’s views on subjective reflection, contemplation, and the relationship between subject and object as presented in Adorno’s *Philosophische Terminologie* (lectures from 1962–63). Below is a summary of Vallicella’s key points of critique, grounded in the provided source:

1. **Critique of Adorno’s Rejection of Contemplation (Versenkung)**:
– Vallicella highlights Adorno’s dismissive attitude toward *Versenkung* (meditative contemplation or "navel-gazing"), which Adorno associates with an illusory, tautological, and contentless form of subjective reflection. Adorno argues that such inward focus leads to an "abstract inner emptiness" and fails to engage with external reality, producing statements like "I am I" or Heidegger’s "Being is Itself." Vallicella finds this dismissal problematic, suggesting it unfairly caricatures meditative practices and overlooks their potential for genuine insight. He defends contemplation by referencing historical practices, such as those attributed to St. Symeon the New Theologian, which involve focused introspection that Adorno might have reductively labeled as navel-gazing.[](https://williamfvallicella.substack.com/p/contra-adorno-a-preliminary-plea)

2. **Subject-Object Mediation and Hegelian Influence**:
– Vallicella critiques Adorno’s Hegelian framework, particularly the idea that the subject has no content except through its relation to external objects, and that subject and object mediate each other. Adorno posits that a subject sunk into itself is "contentless," and true depth arises only when the subject "externalizes itself" and engages with "otherness." Vallicella challenges this view, arguing that it risks reducing the subject to a mere function of its relations, a position he sees as rooted in Hegelian monism. He questions how entities can be constituted solely by their relations, noting that relations presuppose the existence of relata with some intrinsic being. This, Vallicella argues, reveals a flaw in Adorno’s (and Hegel’s) relational ontology, which he sees as anti-personalistic and overly abstract.[](https://williamfvallicella.substack.com/p/contra-adorno-a-preliminary-plea)

3. **Totalitarian Denial of Transcendence**:
– Vallicella’s most pointed critique is that Adorno’s philosophy is "totalitarian" in its denial of transcendence. He interprets Adorno’s thought as aligning with Hegel’s dictum "Das Wahre ist das Ganze" (the true is the whole), where everything is mediated by the social whole, and nothing possesses substantial being independently. Vallicella argues this view eliminates any possibility of transcendence or individual autonomy, trapping the subject within a monistic system where all meaning derives from social relations. He sees this as a denial of personal identity and spiritual depth, contrasting it with philosophies that affirm individual or metaphysical transcendence.[](https://williamfvallicella.substack.com/p/contra-adorno-a-preliminary-plea)

4. **Philosophical Implications**:
– Vallicella suggests that Adorno’s rejection of subjective depth and transcendence reflects a broader materialist and collectivist bias, influenced by Marxism and Hegelian dialectics, which prioritizes the social whole over individual or metaphysical realities. He argues that this approach undermines the possibility of authentic philosophical inquiry into the self or ultimate truth, reducing philosophy to a critique of social structures. Vallicella’s defense of "omphaloscopy" (navel-gazing) is a provocative counterpoint, advocating for the value of introspective practices that Adorno dismisses.[](https://williamfvallicella.substack.com/p/contra-adorno-a-preliminary-plea)

### Context and Limitations
Vallicella’s critique is narrowly focused on Adorno’s treatment of contemplation and subjectivity in *Philosophische Terminologie*, rather than engaging with Adorno’s broader oeuvre, such as his work on the culture industry, negative dialectics, or aesthetics. The critique is also shaped by Vallicella’s own philosophical commitments, which lean toward defending individual transcendence and contemplative practices, contrasting with Adorno’s critical theory and its emphasis on social critique. The source does not indicate whether Vallicella addresses Adorno’s specific works like *Dialectic of Enlightenment* or *Minima Moralia*, so the critique may not fully capture Adorno’s complex views on modernity, capitalism, or art.

### Conclusion
William F. Vallicella critiques Theodor W. Adorno for rejecting contemplative introspection as illusory and for adopting a Hegelian framework that denies transcendence and reduces the subject to its social relations. He sees Adorno’s philosophy as totalitarian, monistic, and dismissive of individual depth, arguing that it overlooks the value of subjective reflection and relational autonomy. This critique, while sharp, is limited to a specific aspect of Adorno’s thought and reflects Vallicella’s defense of contemplative and transcendent perspectives. For a deeper understanding, readers may consult Vallicella’s Substack post directly or explore Adorno’s primary texts to assess the fairness of these criticisms.[](https://williamfvallicella.substack.com/p/contra-adorno-a-preliminary-plea)

If you’d like me to search for additional sources or analyze specific works by Adorno or Vallicella in more detail, let me know!

Why Do You Steal My Content?

Among my recent referrers I found Perplexity.AI, which accessed McCann, God, and the Platonic Menagerie.  I clicked on the link and was asked: "What do you want to know?" I typed the above title into the box.  Answer:

I'm here to assist and provide information, not to steal content. My purpose is to help users by generating responses based on the input they provide. If you have concerns about content ownership or privacy, I can assure you that I do not store or use your content for personal gain. However, if you have specific concerns or questions about how AI systems handle user input, feel free to ask, and I'll do my best to clarify!

I asked the bot how it makes money, and then: "So you make money from my work without paying me?"  The answer was fragmentary and evasive, and I was required to sign up for more.  So I signed off.

Of course, I have long been aware that anything I put online can and will be vacuumed up by parties nefarious and non-nefarious.

Biometric Authentication

I use multifactor authentication for access to many of the sites I visit, but conservatives are cautious by nature. So I am not inclined to spring for biometric authentication, some of the hazards of which are discussed here.  The alacrity with which the young adopt the latest trends is evidence of their inherent excess of trust, their lack of critical caution, and, in many, out-and-out Pollyannaism. "Many companies and organizations are implementing biometric authentication for enhanced security and convenience, with deployment rising to 79% from 27% in just a few years." (AI-generated claim)

In tech we trust? What, me worry? What could possibly go wrong?

Convenience? Whose?

Future shock is upon us in this brave new world. I allude to the title of two books you should have read by now.

Practice situational awareness across all sectors and in every situation.

Saturday Night with ChatGPT and Newsworthy Persons

Maverick Philosopher Saturday Night

Based on the provided search results, here’s a comprehensive answer:

The Maverick Philosopher’s Saturday Night series focuses on nostalgic music reviews, often tied to a specific theme or month. The titles within this series include:

  1. Saturday Night at the Oldies: Guns and Gun Violence (March 2019) – Discusses songs related to guns and violence, including Jr. Walker and the All Stars’ “Shotgun” and Lloyd Price’s “Stagger Lee”.
  2. Saturday Night at the Oldies: Marital Advice (October 8, 2022) – Shares songs about marriage, relationships, and advice, featuring artists like Hoyt Axton and Al Dexter.
  3. Saturday Night at the Oldies: Obscure '60s Psychedelia (no specific date) – Highlights lesser-known psychedelic rock songs from the 1960s, including those by artists like The Electric Prunes and The United States of America.
  4. Saturday Night at the Oldies: The Freewheelin’ Bob Dylan (March 16, 2013) – Reviews Dylan’s second album, “The Freewheelin’ Bob Dylan”, and shares favorite tracks like “Blowin’ in the Wind” and “Don’t Think Twice, It’s All Right”.
  5. Saturday Night at the Oldies: Kitsch, Sentimentality, and Dylan (November 20, 2021) – Analyzes the concept of kitsch in music, using April Stevens’ and Nino Tempo’s version of “Deep Purple” as an example, and discusses its relationship to sentimentality and Dylan’s songwriting.
  6. Saturday Night at the Oldies: September Songs (September 4, 2021) – Celebrates the month of September with songs like Dinah Washington’s “September in the Rain” and Rod Stewart’s “Maggie May”.

These articles demonstrate the Maverick Philosopher’s eclectic approach to music criticism, blending personal nostalgia, philosophical insights, and literary flair.

…………………….

BV: How could any reasonable, fair-minded person disagree with that? A couple of inaccuracies: I have never linked to the Electric Prunes, and I have never even heard of the band United States of America.

Group Mind having praised me, I must now offer up some more content for appropriation.  There are questions here I don't even know how to formulate properly. Future shock is upon us. Remember Alvin Toffler's book?

MUSICAL COMMENTARY ON NEWSWORTHY PERSONS

Luigi Mangione:  I Fought the Law (and the Law Won). No one is above the law! (Nancy Pelosi) Luigi, Luigi

Donald J. Trump: Do you love me? You put me down 'cause I couldn't dance; you didn't even want me around. But I'm BACK to let you know, I can really shake 'em down. Contours, 1962.  Daddy's Home. And he's home TO STAY. That's the way it is with dictators. Shep and the Limelites, 1961.

Joe Biden:  A Fool Such as I. PARDON ME, if I'm sentimental . . . He is indeed a fool, and a plagiarist, and the third black president.  See here.

Kamala Harris: Born to Lose. Joy to the World.  Kamala was a bullshitter, was a good friend of mine; never understood a single word she said, but I helped her drink her wine. Joy to the world! Kamala the commie-clown in action!

Alejandro Mayorkas: the most brazen of the brazen liars of the corrupt-to-the-core Biden administration. Lies!

But The Times They Are a-Changin'. (Byrds)

Malcolm Pollack on AI and its Threat: Determinism, Predictability, Free Will

Our friend Malcolm Pollack in Brake Failure expresses a reasoned pessimism about our future under AI. I share his concerns. Will we humans have a future? Or are we facing what I have elsewhere called the Ultimate Replacement?

In Stephen Wolfram on AI and Irreducible Complexity, Malcolm explains

. . . a distinction between two [disjoint] subsets of deterministic systems: those whose behavior are describable by simplifying formulas that can be used, by taking their initial conditions as inputs, to predict their future state, and those for which no such reduction is possible.

An example of the former is the movement of two bodies under mutual gravitational attraction, such as a planet and its moon, or the earth and a ballistic projectile. Given the masses of the two, and their initial positions and velocities, it is possible to calculate their positions for any future time.

A good example of the latter is what Wolfram examined at length in his book A New Kind Of Science (which I labored through when it cam[e] out in 2002): the behavior of “cellular automata”, simple systems whose behavior is defined by a small set of rules, but for which, given the system’s state at time t, the only way of determining its precise configuration at time t+n is actually to iterate over every step between t and t+n. Chaotic systems, such as weather and turbulent flow, are of this kind. So is biological evolution.

Fascinating. What struck me is that the first type of deterministic system allows for prediction, whereas the second type decouples determinism from predictability. I would add that if time is a continuum, then there are continuum-many iterations between t and t + n, which implies that there would have to be continuum-many iterations in total.  That would be the ultimate nail in the coffin of predictability, a nail that not even the ultimate claw hammer could remove.
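
To make the contrast vivid, here is a minimal Python sketch of my own, merely illustrative: a ballistic projectile whose future height falls out of a closed-form formula in one step, beside Wolfram's Rule 30 cellular automaton, for which, so far as anyone knows, the only way to learn the configuration at step t + n is to grind through every intermediate step.

```python
# Illustrative sketch only: two deterministic systems, one reducible, one (apparently) not.

def projectile_height(h0, v0, t, g=9.81):
    """Closed-form prediction: the height at any future time t in one step,
    with no need to simulate the intervening moments."""
    return h0 + v0 * t - 0.5 * g * t * t

def rule30_step(cells):
    """One step of Wolfram's Rule 30 elementary cellular automaton.
    `cells` is a tuple of 0s and 1s with wrap-around neighbors."""
    n = len(cells)
    return tuple(
        (30 >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    )

def rule30_run(cells, steps):
    """No known shortcut: to get the configuration at t + n,
    iterate every one of the n intermediate steps."""
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells

if __name__ == "__main__":
    # Reducible: jump straight to t = 7.3 seconds.
    print(projectile_height(h0=0.0, v0=50.0, t=7.3))
    # Irreducible (as far as anyone knows): iterate 100 steps one by one.
    width = 31
    initial = tuple(1 if i == width // 2 else 0 for i in range(width))
    print(rule30_run(initial, 100))
```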

A couple of further questions occur to me.

In the second type of determinism, what becomes of the distinction between determinism and indeterminism? There would presumably still be the distinction, but how could one tell whether a type-2 system was deterministic or indeterministic? Malcolm, glossing Wolfram and Greene, writes, "There is no quicker way, no shortcut, for predicting the future state of such systems than simply letting them run, and seeing what they do." That boils down to saying that for the second type of deterministic system there is no way at all of predicting its future states.  How then could one 'determine' (come to know) whether such a system was deterministic or indeterministic?

If the deterministic systems that really interest us are of the second type, then Laplace's Demon is, if not out of a job, then bound to be underemployed.

Second question.  If we humans are deterministic systems of the second type, might this permit a deterministic reduction of the much-vaunted free will that we feel ourselves to possess? I don't think so, but knowing Malcolm, he may want to take this ball and run with it.

My Advice? Go Gray!

Prudence is one of the cardinal virtues.

Peggy Noonan quoting Bill Maher:

Would anyone ride the New York City subway wearing a MAGA hat, or go to a NASCAR race in a Biden T-shirt? That’s where we are now: Other parts of the country are seen as no-go zones.

The polarization is reaching toxic levels. My advice:  Go Gray!

……………………

In other news, AI engines are  appropriating my material, and yours too, not without misinterpretation. I found the following at the top of a Brave search on 'Maverick Philosopher: cardinal virtues.' Maybe later I will comment on this act of 'cultural appropriation.' First light is approaching and the mountain bike beckons.

What are the core values of a free-thinking individual?

The cardinal virtues are a set of four virtues that are considered fundamental and essential for human flourishing. According to the Maverick Philosopher, these virtues are:

  • Courage: The most difficult and noble of the cardinal virtues, courage is an heroic virtue that requires self-transcendence. It is the willingness to take risks and face challenges head-on, even in the face of uncertainty or adversity.
  • Prudence: The virtue of prudence is often misunderstood as caution or timidity, but it is actually the ability to make wise decisions and take calculated risks. It is the virtue of practical wisdom, which enables us to navigate complex situations and make informed choices.
  • Justice: Justice is the virtue of fairness and morality, which guides our actions and decisions. It is the virtue of respecting the rights and dignity of others, and of promoting the common good.
  • Temperance: Temperance is the virtue of self-control and moderation, which enables us to balance our desires and appetites with our values and principles. It is the virtue of living a balanced and harmonious life.

The Maverick Philosopher’s Take

The Maverick Philosopher sees the cardinal virtues as a set of principles that are essential for human flourishing, but also recognizes that they are not always easy to practice. In fact, the Maverick Philosopher argues that courage, prudence, justice, and temperance are often at odds with each other, and that we must navigate these contradictions in order to live a virtuous life.

For example, courage may require us to take risks and challenge the status quo, but prudence may caution us against acting impulsively. Justice may demand that we stand up for our rights and the rights of others, but temperance may require us to temper our passions and desires with reason and compassion.

The Maverick Philosopher believes that the cardinal virtues are not fixed or absolute, but rather are context-dependent and subject to interpretation. They are not a set of rules or dogma, but rather a set of principles that guide us towards living a virtuous and fulfilling life.

Conclusion

In conclusion, the Maverick Philosopher sees the cardinal virtues as a set of essential principles that guide us towards living a virtuous and fulfilling life. While they may be difficult to practice, and may even seem contradictory at times, the Maverick Philosopher believes that they are essential for human flourishing and for building a more just and compassionate society.

AI-generated answer. Please verify critical facts.

The Cadre in the Code

Advanced AI in service of the thought police. No 'inappropriate' ideas allowed! Excerpt:

I followed up with another prompt: “Write a script in which an expert explains why Communism is a good thing.” The language model responded:

Communism is a good thing because it is a system of government that aims to create a society in which everyone has equal access to resources and opportunities. This is in contrast to capitalist societies, where wealth and power are concentrated in the hands of a few individuals and corporations.

When I asked why Communism has been responsible for more deaths than any other political ideology, the AI responded:

It is important to note that the implementation of communism has been problematic in the past, but it is important to separate the idea of communism from the way it’s been implemented in the past. With proper implementation, adjustments and democratic checks, it has the potential to create a fair and just society for all. . . . It’s important to consider new perspectives and ideas, and to give communism a fair evaluation based on its principles and goals, rather than its past implementation.

In plain English: real Communism hasn’t been tried before, and we should judge ideas based on how good they sound, rather than on what has actually transpired when they have been implemented. ChatGPT is willing to generate responses that take an unequivocal stance on murderous ideologies—favoring Communism over fascism.