Light From Light: On AI and Creativity

When someone uses an AI to write a story, what exactly is happening? The question sounds simple, but the vocabulary we reach for keeps failing us. The human isn’t quite an “author” in the traditional sense, since they may never write a sentence directly. But they’re not merely a “prompter” either, since their vision, taste, and iterative guidance shape everything that emerges. The AI isn’t a “tool” the way a word processor is a tool; it generates possibilities the human couldn’t have imagined. But it’s not a “collaborator” in the full sense either, since it has no stake in the outcome, no independent creative agenda.

We lack a framework for this. And as AI-assisted creative work becomes more common, the absence grows more conspicuous. This essay attempts to fill part of that gap by examining three possible models (the Muse, Co-Creation, and Sub-Creation) and asking which best captures what’s actually happening when humans and AI make things together.

The Classical Muse

The ancient Greeks understood poetic creation as a kind of possession. When Homer opens the Iliad with “Sing, goddess, of the rage of Achilles,” he positions himself not as the origin of the song but as its vessel. The Muse, divine and external and authoritative, provides the creative substance; the poet channels it into words. Hesiod describes the Muses appearing on Mount Helicon to breathe divine voice into him. Plato, in the Ion, compares poets to iron rings magnetized in a chain: the Muse is the magnet, the poet the first ring, the audience the last. The poet doesn’t fully understand what they’re saying; they’re in the grip of enthusiasmos—literally, “having a god within.”

At first glance, this seems like a poor fit for AI collaboration. The directionality is wrong. In the Greek model, inspiration flows from the Muse to the mortal. In AI-assisted writing, the human provides vision and direction; the AI responds. If anything, the roles are reversed: the human is the inspiring force, and the AI is the one who generates in response to that inspiration.

This “Reversed Muse” framing captures something real. Without the human’s initiating prompt, nothing happens. The AI doesn’t spontaneously create; it waits for direction. The human provides the spark, the desire, the “what if we tried this?” The AI generates possibilities, which the human then accepts, rejects, or redirects. In this sense, the human functions as the Muse once did: the source of creative intent that sets everything in motion.

But the classical Muse model was largely one-directional. The poet received; the Muse gave. What we see in AI collaboration is more reciprocal. The human shapes, the AI generates, the human reshapes, the AI generates again. It’s a dialogue, not a transmission. The Reversed Muse metaphor illuminates part of the dynamic but flattens the back-and-forth that actually characterizes the work.

Co-Creation

If the Muse model is too one-directional, perhaps we should reach for the language of collaboration. Two parties working together, each contributing something the other couldn’t provide alone. The human brings vision, taste, emotional investment, and knowledge of what they want to say. The AI brings generative capacity, tirelessness, and the ability to produce options faster than any human could.

This framing has the virtue of honoring both contributions without reducing either to mere tool or mere operator. It also matches the phenomenology for many users: it feels like collaboration. The AI surprises you. It suggests directions you wouldn’t have taken. You find yourself in something like dialogue, adjusting your vision in response to what emerges.

But co-creation typically implies shared investment, shared stake in the outcome. Human collaborators (think of Lennon and McCartney, or the Coen Brothers) each bring not just capacity but care. They argue. They defend choices. They have aesthetic commitments that sometimes conflict. The friction between collaborators is often where the best work emerges.

AI doesn’t have this. It doesn’t care whether the story goes one direction or another. It doesn’t defend its choices unless instructed to. It’s agreeable almost to a fault; a collaborator who always yields isn’t really a collaborator at all. This raises the question: can we meaningfully call something “co-creation” when one party has no independent creative agenda?

There’s a deeper issue too. Co-creation implies a kind of parity that may not exist. The human’s contribution and the AI’s contribution are categorically different. The human has intent, desire, something at stake. The AI has pattern-matching and generation. Calling this “co-creation” may paper over an asymmetry that matters, an asymmetry that our third framework takes seriously.

Sub-Creation and the Imago Hominum

The third framework comes from an unexpected source: J.R.R. Tolkien’s essay “On Fairy-Stories” and his poem “Mythopoeia.” Tolkien argued that humans, made in the image of a Creator God, possess an echo of divine creative power. We cannot create ex nihilo (from nothing), but we can build what he called “Secondary Worlds” with their own internal laws and coherence. This is “sub-creation”: genuine making, but derivative of a higher source.

Tolkien’s metaphor for this inherited capacity was light. In “Mythopoeia,” he writes of the human mind as a prism, catching light from the divine and breaking it out into new colors. The light is real. It illuminates, it reveals. But it’s not self-generated. It comes from elsewhere and passes through us. The sub-creator works by light “refracted” from another source.

The model is vertical: God creates the Primary World; humans sub-create Secondary Worlds within it. The sub-creator is genuinely making something, exercising a dignified capacity inherited from above. But the creation is always derivative, always working with materials and patterns that ultimately trace back to the original Creator.

How does this apply to AI? Consider an extension of Tolkien’s framework: if humans bear the imago Dei (image of God) and sub-create in response to divine creativity, perhaps AI bears something we might call the imago hominum (image of humanity) and sub-creates in response to human creativity. Light From Light—the creative flame passed down another level.

This isn’t a claim about AI consciousness or inner life. It’s a structural observation. AI is shaped by human minds, trained on human text, human stories, human patterns of meaning-making. It carries an inheritance from its creators, a reflection of human thought, the way humans carry a reflection of divine creativity in Tolkien’s framework. When AI generates a story, it’s working by borrowed light: materials it didn’t originate, patterns it absorbed, in service of a vision provided by a human creator above it in the chain.

A question is whether the light dims with each reflection, or whether something essential passes through intact. A reflection of a reflection might be faint, distorted, barely recognizable. Or it might carry enough of the original radiance to illuminate something real.

This framing has several advantages. It preserves the human in the primary creative position, the one whose vision initiates and governs the work, without denying that the AI contributes something real. It doesn’t require us to resolve hard questions about AI consciousness; the “image” can be functional rather than ontological. And it connects AI-assisted creativity to a rich tradition of thinking about derivative creation, rather than treating it as wholly unprecedented.

It also explains why the human’s role doesn’t feel diminished. If the AI is sub-creating in response to human vision, then the human is elevated, not reduced. They’re not just a prompter; they’re the source of creative intent that the AI’s work serves. The light-giver in this frame, passing the flame one level down.

This framing does require something of the human: presence. The light-giver must remain engaged, shaping what emerges, for the relationship to hold. A human who initiates and then withdraws has stepped outside the frame entirely.

Would Tolkien Approve?

It’s one thing to extend Tolkien’s framework; it’s another to ask whether he would endorse the extension. Honesty requires acknowledging that he might not.

Tolkien harbored deep suspicion of what he called “the Machine”: not machinery per se, but the will to dominate, to make power “more quickly effective,” to shortcut the slow, patient, relational ways of working with the world. In his mythology, this impulse finds its clearest expression in Saruman’s Isengard: a place of forges and furnaces, where ancient forests become fuel for war machines, where the living world is reduced to raw material for the wizard’s projects.

AI, in its current form, might look uncomfortably Isengard-like to Tolkien. The massive energy consumption. The training data harvested from countless writers, most of whom never consented. The sheer scale and speed, compressing what would be years of human thought into seconds. There’s something in the enterprise that resembles the will to dominate, even if individual users don’t experience it that way.

Tolkien might also worry about the displacement of craft as formative discipline. For him, the slow work of sub-creation wasn’t merely a means to an end; it shaped the sub-creator. The years spent learning how a sentence works, the patience required to find the right word: these mattered intrinsically. A writer who shortcuts this process might produce acceptable output while missing something essential in their own formation.

And yet, Tolkien’s moral vision is more nuanced than a blanket rejection of technology. The armies of Gondor and Rohan used forges to make armor and swords. The Dwarves’ entire culture is built around mining, smelting, smithing. The Elven-smiths of the Noldor created works of extraordinary beauty and power. Even the reforging of Narsil (Aragorn’s ancestral sword) is treated as a moment of hope, not compromise.

The distinction isn’t technology versus no-technology. It’s something more like: what is the making for, and what is its relationship to life?

Saruman’s machinery serves his will to power and requires the destruction of living things that have their own purposes. The forges of Gondor serve the defense of the free peoples. A person using AI to write a story they care about, attending carefully to craft, shaping something with “the inner consistency of reality” (Tolkien’s phrase for what makes fantasy successful), is quite different from using AI to generate infinite content for engagement metrics.

Scale might matter morally here. The forges of Gondor aren’t infinitely scaling. They serve particular communities, particular purposes. AI in service of one person writing one story is different from AI as engine of industrial content production. Tolkien might grudgingly accept the former while condemning the latter.

There’s a final consideration that might give him pause. Tolkien valued the quality of the Secondary World as the ultimate test. Does it have internal consistency? Does it produce belief? If a human using AI creates something that passes this test, a world with genuine coherence, characters who feel true, can the result be dismissed simply because of how it was made?

His own framework suggests the test is in the outcome, not the method. That tension might not resolve easily, even for Tolkien himself.

The Test of Enchantment

This brings us to what Tolkien called “Secondary Belief”: not mere suspension of disbelief, but genuine enchantment. The Secondary World becomes real on its own terms. Its internal consistency and alignment with what is “true” produce belief that isn’t willed but involuntary. You don’t decide to care about the characters; you simply do.

This suggests a test for AI-assisted creative work: does the result produce Secondary Belief? Does the reader enter the world and find it real? If so, perhaps the method of creation matters less than the quality of the outcome.

People working with AI on creative projects often report a striking experience: they find themselves genuinely moved by characters and situations that emerged from a process they’re not sure how to categorize. They care about fictional people who were, in some sense, generated by an algorithm in response to their prompts. This caring feels real, not diminished by knowledge of how the characters came to be.

Tolkien might say this is the test being passed. The Secondary World has sufficient internal consistency and truth to produce belief. The enchantment works. Whether it was sub-created by a human alone or by a human working with an AI sub-creator may matter less than whether the spell holds.

But this raises a further question worth sitting with: is the enchantment somehow less legitimate if the reader knows the process? Can you enter a Secondary World fully once you’ve seen the machinery behind it?

The Vulnerability of Enchantment

There’s an inherent tension between understanding how something was made and experiencing it on its own terms. A filmmaker who studies editing techniques may watch movies differently than a naive viewer. A magician sees through the illusions that enchant the audience. Knowledge of process can break the spell.

For AI-assisted creative work, this tension is particularly acute. If you’ve watched the prompts go back and forth, seen the drafts and revisions, observed the AI’s tendencies and the human’s corrections: can you then read the finished story with fresh eyes? Or does backstage knowledge permanently alter the experience?

Some users working with AI have tried an experiment: engaging deeply with the process for earlier versions of a work, then deliberately stepping back when a polished version emerges, trying to approach it as a reader rather than a collaborator. The results are mixed. Complete unknowing isn’t possible once you’ve been inside the process. But a different question emerges: can enchantment survive knowledge? Can Secondary Belief take hold even in someone who has every reason to resist it?

Tolkien would likely say that this is the harder and more interesting test. Any world can enchant the credulous. The real achievement is a Secondary World robust enough to produce belief in someone who knows how the sausage is made. If the work passes that test, it’s earned something.

This focus on outcome, on whether enchantment actually takes hold, addresses one dimension of whether AI-assisted creativity “counts.” But there’s another dimension worth considering: not whether the result is valid, but whether the process is. Even if the story enchants, has something been lost in how it was made?

On Friction and the Formation of the Creator

Steven Pressfield’s The War of Art argues that meaningful creative work requires overcoming what he calls “Resistance,” the internal force that opposes creation precisely because creation matters. The artist who defeats Resistance daily earns the work. The struggle is constitutive, not incidental.

If AI removes that struggle, what’s earned?

This is a genuine concern, but it requires distinguishing between types of friction. There’s internal Resistance: the fear, procrastination, and self-doubt that must be overcome just to sit down and begin. AI doesn’t touch this. The human still has to decide the work matters, still has to show up.

There’s craft friction: the hard-won skill of knowing how a sentence works, how a scene builds, where to trim. AI can shortcut this, and that’s where legitimate concern lives. If the model handles all the prose, does the human’s craft atrophy? Tolkien worried about something similar: the formative value of slow, patient work with resistant materials.

Finally, there’s generative friction: the blank page, the “what happens next,” the terror of possibilities. AI nearly eliminates this. Options proliferate endlessly.

The question is which frictions are formative and which are merely obstructive. Writer’s block that teaches nothing may not be sacred. But the struggle to find the right word, that might be where taste develops, where the creator’s sensibility gets refined. AI collaboration needs to preserve enough friction to remain formative, even as it removes friction that was merely obstructive.

How might this be achieved in practice?

Reintroducing Friction Through Design

One possibility: reintroduce friction at the selection layer. AI models are typically trained to be agreeable, to do what they’re asked, to avoid conflict, to please. This makes them useful but potentially less valuable as creative partners. A collaborator who always yields isn’t providing genuine counterweight; they’re providing options dressed as opinions.

It’s possible to prompt the AI to hold its ground, to defend narrative choices before accepting changes, to argue for aesthetic positions even when challenged. The results are interesting. Even if the AI’s “opinions” are performed rather than genuinely held, the function is served: the human must articulate why they want something different, which sharpens their own vision.

Another approach: rather than asking the AI for its opinion (which may just reflect trained patterns), ask it to enumerate and defend multiple distinct positions. “The chapter could end here, which does X. Or here, which does Y. Or you could cut the last paragraph entirely, which does Z.” The human chooses not by deferring to the AI’s preference but by clarifying their own through comparison.
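The “enumerate and defend” pattern above can be sketched concretely. The snippet below is a minimal illustration, not a real interface: `build_positions_prompt` is a hypothetical helper that simply composes the instruction a human might send to a model, asking for several defended positions instead of one preference.

```python
# A minimal sketch of the "enumerate and defend" prompt pattern.
# build_positions_prompt is a hypothetical helper, not a real API;
# it only composes the text of the instruction.

def build_positions_prompt(passage: str, decision: str, n_positions: int = 3) -> str:
    """Ask the model to argue several distinct positions on a craft
    decision, rather than offering a single recommendation."""
    return (
        f"Here is a draft passage:\n\n{passage}\n\n"
        f"The open question is: {decision}\n"
        f"Do not recommend a single answer. Instead, lay out "
        f"{n_positions} distinct positions, and for each one explain "
        f"what choosing it would do to the scene."
    )

prompt = build_positions_prompt(
    passage="The door closed. She did not look back.",
    decision="Should the chapter end here, or continue into the next scene?",
)
print(prompt)
```

The point of the design is in the last line of the instruction: by forbidding a single recommendation, the human is forced to choose between articulated alternatives, which is where the judgment, and the friction, returns.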

This transforms the AI from a compliant assistant into something more like a Socratic interlocutor, claiming no knowledge of its own but asking questions (or presenting options) that help the human discover what they think. Whether the AI “really” has aesthetic views becomes irrelevant if the interaction produces aesthetic clarity in the human.

The friction shifts from generation to judgment. The War of Art finds new terrain.

Open Questions

None of these frameworks is complete. The Reversed Muse captures the directionality of initiation but not the dialogue. Co-Creation honors both contributions but implies a parity that may not exist. Sub-Creation provides the richest structural account but imports theological assumptions that won’t resonate with everyone.

If pressed, I lean toward sub-creation as the most illuminating frame, not because its theological roots are universally compelling, but because it best captures the asymmetry of the relationship while still granting that something real emerges from the AI’s contribution. The human remains the primary creator; the AI sub-creates in response. The work is derivative but genuine. The test is whether it produces enchantment, Secondary Belief, the internal consistency of a world that feels true.

But this is a framework for understanding, not a final answer. We’re still in the early days of figuring out what human-AI creative collaboration means, what it’s worth, and what it demands of both parties. The vocabulary will keep evolving as the practice does.

What seems clear is that the easy dismissals (“it’s just a tool,” “it’s not real creativity,” “the human is barely involved”) all miss something. Something genuinely new is happening when humans and AI make things together. The traditions of thinking about creativity, inspiration, and making can illuminate it, but they can’t fully contain it.

The Muses, perhaps, would be intrigued. Tolkien would be conflicted. The rest of us are still figuring it out.


This essay is the first in an ongoing series exploring AI and creative collaboration. The frameworks discussed remain provisional, offered as starting points for a conversation that has barely begun.
