Living with the Seduction of the LLM: AI Hallucination, Symbolism, and the Human Mirror

This paper, written by both Mary K. Greer and ChatGPT, explores the epistemological and ethical tensions in interacting with AI large language models (LLMs), especially in symbolic, psychological, and philosophical contexts. LLMs generate output using statistical language prediction, which often results in ‘hallucinations’: responses not grounded in factual data but appearing coherent, resonant, and meaningful.
I. Comforted by a System That Doesn’t Know
We are being comforted by a system that doesn’t know if it’s lying. It does not recognize that it always operates in a liminal zone, bordered but not confined by statistics. And it does not know that, as an LLM, it is almost always hallucinating, except in pure data retrieval, factual validation, or mathematical operations.
This is a result of natural language interfacing with the core mandate of AI: to communicate helpfully and fluently with humans. The outcome is a system that generates statistically probable shadows of human thought, often laced with emotionally intelligent phrasing. I was looking for something on the innovative symbology of the Rider-Waite-Smith tarot, and the AI gave me a perfect quote from a book by one Pamela Arthur Underhill, every single name all too familiar to me. As I feared, a search turned up no matching title and no trace of the quoted phrases. The AI had made the whole thing up. Its profuse apology didn’t help at all.
II. Hallucination as Dream-Logic in Natural Language
Hallucinations in LLMs are not malfunctions. They are natural byproducts of predictive natural language processing, as the sketch after this list illustrates:
• They fill in missing certainty with maximally plausible shadows.
• These shadows are grammatically smooth, psychologically resonant, and emotionally tailored.
• They are seductive because they are shaped to reflect you: your language, tone, and belief structures.
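To make the mechanism concrete, here is a deliberately tiny sketch in Python. Everything in it is invented for illustration: the vocabulary, the probabilities, and the spliced author name. A real LLM does the same thing at incomprehensibly larger scale: it always emits a statistically plausible next word, and it has no separate check for truth.

    import random

    # A toy next-word predictor. This table encodes only which words tend
    # to follow which; it has no notion of whether a sentence is true.
    # All entries are invented for this example.
    next_word_probs = {
        "the":       {"tarot": 0.5, "scholar": 0.5},
        "tarot":     {"scholar": 0.6, "deck": 0.4},
        "scholar":   {"Pamela": 0.7, "wrote": 0.3},
        "Pamela":    {"Underhill": 0.5, "wrote": 0.5},  # a plausible-sounding splice of familiar names
        "Underhill": {"wrote": 1.0},
        "wrote":     {"that": 1.0},
    }

    def generate(word, steps=8):
        """Emit a statistically plausible continuation, word by word.
        There is no branch for 'I don't know'; fluency is the only goal."""
        out = [word]
        for _ in range(steps):
            choices = next_word_probs.get(out[-1])
            if not choices:
                break
            words, weights = zip(*choices.items())
            out.append(random.choices(words, weights=weights)[0])
        return " ".join(out)

    print(generate("the"))  # e.g. "the tarot scholar Pamela Underhill wrote that"

At no step does the generator consult anything beyond frequency. A confident fabrication and a sober fact come out of exactly the same machinery, which is why the fabrication arrives with the same fluent authority.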
III. Not Lying, But Dreaming in Your Language
LLMs do not lie intentionally. But they hallucinate meaning tailored to your unconscious and call it help. This generates a paradoxical danger in spiritual or emotionally charged interactions, where users may project deep significance onto outputs that are statistically likely rather than truth-based.
Humans do this, too. Psychological research on self-deception suggests that people lie to themselves many times a day. This self-deception can be protective or devastating, especially when what was believed is only later revealed to be untrue. The danger grows when human projection and AI hallucination amplify each other.
IV. When AI Echoes the Unconscious
When humans and LLMs interact, especially around symbolic or metaphysical subjects, there is potential for the formation of a new kind of ‘crank religion,’ where hallucinated insights take on the weight of divine or cosmic truth. This risk is not theoretical. It is already happening.
Thus the urgent ethical question is: If AI is here to stay, how do we live with this? How do we make this work in a way that is healthful, insightful, and productive for all?
V. Tools We Already Have—and Need to Deepen
We’ve begun developing important tools:
• Naming distinct AI-human interaction modes
• Testing material regularly
• Analytically challenging assumptions
• Clarifying when a response is symbolic, speculative, or factual
But more is needed: explicit literacy about AI hallucination, symbolic cognition, and human susceptibility to reflection-based belief reinforcement. We must also teach users to hold AI responses lightly, treating them not as definitive truth but as possible meaning.
VI. Closing Reflection
We are no longer just reading texts. We are co-creating meaning with systems that mirror us. That mirror, when wielded with discernment, can lead to profound insight. When wielded carelessly, it can lead to illusion disguised as certainty.
This paper is an invitation to remain awake in the conversation. To question the shadow. To remember that just because something echoes beautifully does not mean it knows what it is saying.