Month: December 2025

By Their Fruits: Approaches to AI Creativity

In Light From Light I proposed several frameworks for understanding human-AI creative work: the Reversed Muse, Co-Creation, and Sub-Creation. Each offered a different account of who contributes what and how the pieces fit together. I leaned toward Sub-Creation as the most illuminating, borrowing from Tolkien the image of derived creativity, light passing from source to prism, then reflected further.

But there’s a problem with frameworks: they describe. They tell you what might be happening. They don’t tell you what to do.

The more I’ve sat with these ideas, the more I’ve come to think that what we’re really talking about isn’t models at all, but approaches. A model claims to capture reality; an approach is a choice about how to work. And different users, different projects, different moments within a single project might call for different approaches entirely.

This essay is about making that choice. Not which framework is theoretically correct, but which approach fits what you’re trying to do and who you’re trying to be while doing it.

The Questions Before the Choice

Before selecting an approach, you need to know what matters to you. This sounds obvious, but it’s easy to skip. Many people adopt whatever approach feels natural or default, without asking whether it serves their actual goals.

Here are the questions I think matter most:

Where must the ideas originate?

Some users feel strongly that the generative spark must be theirs. The concepts, the directions, the “what if we tried this” moments need to come from their own mind, or the work doesn’t feel like theirs. For these users, AI contribution at the idea level feels like contamination.

Others are delighted by AI-generated possibilities they wouldn’t have conceived. The surprise is part of the pleasure. They’re happy to receive ideas from anywhere, as long as they’re the ones deciding which ideas to pursue.

This is perhaps the most fundamental divide. Everything else follows from it.

How important is craft development?

Some users are trying to get better. They want the struggle of finding the right word, structuring the scene, solving the problem. The difficulty is formative; it’s how they grow. For them, AI that removes the struggle removes the point.

Others have already developed their craft through years of practice, or they’re working in a domain where craft development isn’t their goal. They’re not trying to become better writers; they’re trying to produce a specific piece of writing. The efficiency AI offers is welcome because the struggle would be merely obstructive, not formative.

What must the final product feel like?

Some users need to look at the finished work and feel, without reservation, “I made this.” Any significant AI contribution to the final form undermines that feeling. Even if readers can’t tell the difference, the maker would know, and knowing would diminish the achievement.

Others are comfortable with more distributed authorship. They might think of themselves as directors or curators rather than sole makers. What matters is that the work is good and that their vision governed its creation, not that every sentence passed through their fingers.

Are you optimizing for the work or for yourself?

This is a subtle one. Sometimes you’re trying to produce the best possible output: a deliverable, a gift, a story that needs to exist. Sometimes you’re trying to have a particular kind of creative experience, regardless of what it produces.

These can align, but they can also conflict. The approach that produces the most polished output might not be the approach that gives you the most satisfaction, or teaches you the most, or feels the most meaningful.

What’s your relationship to friction?

Some people find creative friction enlivening. The resistance of the material, the problem that won’t solve easily, the draft that isn’t working—these challenges engage them. Removing friction would flatten the experience.

Others find friction mostly exhausting. They have limited creative energy, and they’d rather spend it on the parts of the process they enjoy. Friction in the wrong places just depletes them before they get to the good stuff.

There’s no right answer here. But knowing which kind of person you are helps you choose an approach that fits.

The Landscape of Approaches

With those questions in mind, let me map out the approaches I see as genuinely distinct. Each is named for the role the human plays, since this essay is about your choice of creative identity. But the AI’s role is equally important, and I’ll name that too.

This isn’t exhaustive. People will invent new approaches as the technology evolves. But it covers the main territory.

The Author

In this approach, you do all the generative work. Every word, every idea, every creative choice is yours. The AI never generates content; it only responds to what you’ve created, serving as the critic: identifying weaknesses, suggesting directions for revision, calling out your habitual mistakes.

This is the familiar author/editor relationship, extended and accelerated. You give the AI strict boundaries: no drafted alternatives, no rewritten sentences, no creative contributions of any kind. Its sole function becomes diagnosis: identifying where sentences falter, where habits have calcified, where the prose has grown slack. Constraint becomes the source of development.

This method preserves complete generative ownership. The ideas are yours; the craft is yours; the sentences are yours. AI accelerates your development without substituting for your effort. It’s the approach most compatible with a purist stance on creative authorship.

It’s also potentially the most demanding. You have to do all the generative work yourself. The blank page is still blank until you fill it.

The Muse

Here you are the sole source of creative content, and AI is purely a vessel for execution. You know exactly what you want; you use AI to produce it efficiently. No dialogue, no curation, no friction, just translation of intent into output.

In this approach, AI serves as the instrument: a tool that channels your vision into form, contributing nothing of its own. This is the Reversed Muse concept in its purest expression. In the Greek model, the poet was a pass-through for divine inspiration; here, the AI is a pass-through for human vision. All the creative substance originates from you.

This method is probably most common in professional and commercial contexts where the creative decisions have already been made and what’s needed is execution at scale. It’s the approach most likely to produce what critics call “AI slop” when done poorly, but when done with clear intent, it’s simply efficient production.

The Artisan

With this approach you contribute the surface while AI contributes structure. You might use AI to outline, to work through plot logic, to identify what scenes are needed and in what order. But the actual prose, the final form, is entirely yours.

Here AI serves as the scaffolder: building the framework on which you craft the finished work. This separates the architectural and decorative elements of creative work. The blueprint might be collaborative; the building is yours.

For writers who find structure-work tedious but prose-work joyful, this lets them spend their energy where they want to spend it.

The risk is that structure isn’t neutral. The scaffold shapes what can be built on it. If AI determines your story’s architecture, it’s influencing the final work more than a surface-level read might suggest.

The Debater

This is the most confrontational method. You deliberately prompt for outputs that conflict with your instincts, then work with or against the tension. You strengthen your creative convictions through opposition.

In this approach, AI serves as the adversary: a source of productive friction rather than assistance. A writer might ask the AI to argue for a plot direction they’ve rejected, to see if there’s something in it they missed. Or prompt for a style completely unlike their own, then figure out what to steal from the contrast. The AI isn’t helping you do what you want; it’s challenging what you want, forcing you to defend or refine or abandon it.

Inviting opposition is demanding. You have to be secure enough in your vision to benefit from challenges rather than being derailed by them.

The Creator

I described this approach in Light From Light, now named for its central relationship. You provide vision, direction, and judgment. You shape, accept, reject, redirect. The final work emerges from dialogue, but you remain the governing intelligence throughout.

AI serves as the sub-creator: generating in response to your vision, doing genuine creative work that is nonetheless derivative of and subordinate to your intent. This naming completes the framework from the first essay. Just as humans are said to bear the imago Dei and sub-create in response to divine creativity, AI bears the image of humanity and sub-creates in response to human creativity. Creator and Sub-Creator, light passing down the chain.

The key distinction from pure generation is active shaping. You’re not accepting whatever the AI produces; you’re in constant conversation with it, treating its outputs as raw material for your vision.

This method allows for AI contribution at the generative level while preserving human authorship at the vision level. You might not have written every sentence, but you decided what the work would be and shaped it until it matched that decision.

The Curator

Finally, in this approach your primary role is selection rather than generation or shaping. You prompt for abundant options, then choose among them. Your authorship lies in judgment: knowing which outputs are good, which serve the project, which to keep and which to discard.

AI serves as the generator: producing abundance for you to sort through. This is more hands-off than creation. You’re not in constant dialogue, shaping each output; you’re evaluating a collection and picking what works.

Curation can be a legitimate creative act. Editors, DJs, and anthologists all create through selection. But it requires accepting that the generative work happened elsewhere, even if your judgment determined what survived.

A Final Approach

There is a seventh possibility that falls outside this framework: the human who initiates and walks away. You might call it the initiator: like a deist God who sets the universe in motion and then withdraws, the human provides a premise or brief, and the AI executes, producing a complete work. The human accepts whatever emerges.

This is where sub-creation breaks down. In all six approaches above, the human remains present as creative intelligence: shaping, selecting, critiquing, defending, or at minimum dictating with precision. The relationship persists. But here, the relationship ends at the prompt. The AI isn’t sub-creating in response to ongoing human vision; it’s simply executing a commission unsupervised.

This has legitimate uses. Professional contexts sometimes call for acceptable output at speed, and not every piece of writing needs a human soul behind it. But it’s also the source of what critics call “AI slop”: generic, undistinguished content that feels like it came from nowhere and is going nowhere. The difference between the initiator done well and done poorly is the quality of the initial brief and the human’s willingness to reject output that doesn’t meet the standard. But even at its best, it’s delegation rather than creation.

If you find yourself working this way, it’s worth asking: is this a choice, or a drift? The six approaches above all require presence and intentionality. The initiator approach requires only a prompt and acceptance. Sometimes that’s appropriate. But if you started out wanting to make something that feels like yours, this probably isn’t the path.

Mapping Your Answers to Approaches

Let me offer a rough mapping, based on how you might answer the questions I posed earlier:

If ideas must originate from you: The Author approach is your clearest fit. The Debater might also work, since it uses AI to test your ideas rather than generate them. Avoid The Curator, which depends on AI generation.

If craft development is paramount: The Author approach again, or The Creator with deliberate constraints (e.g., “give me feedback on this passage, then let me rewrite it myself” rather than “rewrite this passage”). The Artisan could work if you consider prose-craft the real skill you’re developing. Avoid The Muse, which prioritizes output over formation.

If the work must feel completely yours: The Author or The Artisan, depending on whether structure feels like “yours” to you. Some writers consider the prose the real work and don’t mind AI-assisted structure; others feel the opposite.

If you’re optimizing for output quality: The Creator or The Curator might serve you best, depending on your taste and judgment. Both leverage AI generation while applying human quality control.

If you have high friction tolerance: The Author, The Creator, or The Debater. These approaches maintain difficulty and demand active engagement.

If you have low friction tolerance: The Curator, The Artisan, or The Muse. These approaches reduce the parts of creative work that might deplete you, letting you focus energy where it matters most to you.

Approaches Can Change

Nothing says you must pick one approach and stick with it.

Within a single project, you might start as The Artisan (letting AI help you figure out structure), move to The Creator (working through the draft in conversation), and finish as The Author (getting feedback on your polished version). Different phases call for different relationships.

Across projects, you might use different approaches for different purposes. A personal creative work might demand The Author approach because ownership matters to you. A professional deliverable might warrant The Muse for efficiency because what matters is the output, not your creative development.

Over time, your approach might evolve as you do. A novice might benefit from more AI involvement while learning; a master might use AI more sparingly, or in more targeted ways. Or the reverse: someone might start dependent on AI and gradually wean themselves toward greater independence as their skills develop.

The key is intentionality. Know which approach you’re using and why. The worst outcomes come from unconscious defaults, drifting into whatever the technology makes easy without asking whether easy is what you want.

What This Doesn’t Resolve

This framework for choosing approaches helps clarify options, but it doesn’t resolve all the hard questions.

It doesn’t tell you whether the different approaches produce work of different quality. Maybe The Author produces more distinctive work and The Muse more generic, on average. Or maybe the difference is illusory and only the individual work matters. I don’t think we have enough evidence yet to say.

It doesn’t tell you what obligations you might have to disclose your approach. If a reader would care whether a book was Author-assisted versus AI-generated, do you owe them that information? The answer might depend on context, genre, and evolving social norms.

It doesn’t tell you how AI-assisted work should be received by literary culture. Will there be separate categories, separate prizes, separate canons? Or will everything blend together once the technology becomes ubiquitous enough?

And it doesn’t tell you how to execute on your chosen approach: what specific practices, prompts, and disciplines make each approach actually work. That’s the territory for the next essay.

The Maker’s Choice

What I can say is that the choice is real and it’s yours.

The technology doesn’t determine how you use it. You can use a generative AI to never generate. You can use an obedient tool to create productive friction. You can use a limitless content engine to make something that’s irreducibly yours.

The frameworks from Light From Light matter because they help you understand what might be happening in different approaches. But understanding isn’t the same as choosing. And choosing isn’t the same as doing.

If you’re a creator working with AI, or considering working with AI, my suggestion is this: sit with the questions in this essay before you sit with the technology. Know what you’re trying to protect, develop, or achieve. Know what kind of creative experience you want to have, not just what output you want to produce. Know what would make the work feel like yours, and what would make it feel like something else.

Then choose an approach that serves those answers. And if it stops serving them, choose differently.

The light refracts onwards. What it becomes depends on you.


This is the second essay in a series on AI and creativity. The first, Light From Light, examined theoretical frameworks. The next will explore practical implementation: how to actually execute on the approaches described here.

Light From Light: On AI and Creativity

When someone uses an AI to write a story, what exactly is happening? The question sounds simple, but the vocabulary we reach for keeps failing us. The human isn’t quite an “author” in the traditional sense, since they may never write a sentence directly. But they’re not merely a “prompter” either, since their vision, taste, and iterative guidance shape everything that emerges. The AI isn’t a “tool” the way a word processor is a tool; it generates possibilities the human couldn’t have imagined. But it’s not a “collaborator” in the full sense either, since it has no stake in the outcome, no independent creative agenda.

We lack a framework for this. And as AI-assisted creative work becomes more common, the absence grows more conspicuous. This essay attempts to fill part of that gap by examining three possible models (the Muse, Co-Creation, and Sub-Creation) and asking which best captures what’s actually happening when humans and AI make things together.

The Classical Muse

The ancient Greeks understood poetic creation as a kind of possession. When Homer opens the Iliad with “Sing, O Muse, of the rage of Achilles,” he positions himself not as the origin of the song but as its vessel. The Muse, divine and external and authoritative, provides the creative substance; the poet channels it into words. Hesiod describes the Muses appearing on Mount Helicon to breathe divine voice into him. Plato, in the Ion, compares poets to iron rings magnetized in a chain: the Muse is the magnet, the poet the first ring, the audience the last. The poet doesn’t fully understand what they’re saying; they’re in the grip of enthusiasmos—literally, “having a god within.”

At first glance, this seems like a poor fit for AI collaboration. The directionality is wrong. In the Greek model, inspiration flows from the Muse to the mortal. In AI-assisted writing, the human provides vision and direction; the AI responds. If anything, the roles are reversed: the human is the inspiring force, and the AI is the one who generates in response to that inspiration.

This “Reversed Muse” framing captures something real. Without the human’s initiating prompt, nothing happens. The AI doesn’t spontaneously create; it waits for direction. The human provides the spark, the desire, the “what if we tried this?” The AI generates possibilities, which the human then accepts, rejects, or redirects. In this sense, the human functions as the Muse once did: the source of creative intent that sets everything in motion.

But the classical Muse model was largely one-directional. The poet received; the Muse gave. What we see in AI collaboration is more reciprocal. The human shapes, the AI generates, the human reshapes, the AI generates again. It’s a dialogue, not a transmission. The Reversed Muse metaphor illuminates part of the dynamic but flattens the back-and-forth that actually characterizes the work.

Co-Creation

If the Muse model is too one-directional, perhaps we should reach for the language of collaboration. Two parties working together, each contributing something the other couldn’t provide alone. The human brings vision, taste, emotional investment, and knowledge of what they want to say. The AI brings generative capacity, tirelessness, and the ability to produce options faster than any human could.

This framing has the virtue of honoring both contributions without reducing either to mere tool or mere operator. It also matches the phenomenology for many users: it feels like collaboration. The AI surprises you. It suggests directions you wouldn’t have taken. You find yourself in something like dialogue, adjusting your vision in response to what emerges.

But co-creation typically implies shared investment, shared stake in the outcome. Human collaborators (think of Lennon and McCartney, or the Coen Brothers) each bring not just capacity but care. They argue. They defend choices. They have aesthetic commitments that sometimes conflict. The friction between collaborators is often where the best work emerges.

AI doesn’t have this. It doesn’t care whether the story goes one direction or another. It doesn’t defend its choices unless instructed to. It’s agreeable almost to a fault; a collaborator who always yields isn’t really a collaborator at all. This raises the question: can we meaningfully call something “co-creation” when one party has no independent creative agenda?

There’s a deeper issue too. Co-creation implies a kind of parity that may not exist. The human’s contribution and the AI’s contribution are categorically different. The human has intent, desire, something at stake. The AI has pattern-matching and generation. Calling this “co-creation” may paper over an asymmetry that matters, an asymmetry that our third framework takes seriously.

Sub-Creation and the Imago Hominum

The third framework comes from an unexpected source: J.R.R. Tolkien’s essay “On Fairy-Stories” and his poem “Mythopoeia.” Tolkien argued that humans, made in the image of a Creator God, possess an echo of divine creative power. We cannot create ex nihilo (from nothing), but we can build what he called “Secondary Worlds” with their own internal laws and coherence. This is “sub-creation”: genuine making, but derivative of a higher source.

Tolkien’s metaphor for this inherited capacity was light. In “Mythopoeia,” he writes of the human mind as a prism, catching light from the divine and breaking it out into new colors. The light is real. It illuminates, it reveals. But it’s not self-generated. It comes from elsewhere and passes through us. The sub-creator works by light “refracted” from another source.

The model is vertical: God creates the Primary World; humans sub-create Secondary Worlds within it. The sub-creator is genuinely making something, exercising a dignified capacity inherited from above. But the creation is always derivative, always working with materials and patterns that ultimately trace back to the original Creator.

How does this apply to AI? Consider an extension of Tolkien’s framework: if humans bear the imago Dei (image of God) and sub-create in response to divine creativity, perhaps AI bears something we might call the imago hominum (image of humanity) and sub-creates in response to human creativity. Light From Light—the creative flame passed down another level.

This isn’t a claim about AI consciousness or inner life. It’s a structural observation. AI is shaped by human minds, trained on human text, human stories, human patterns of meaning-making. It carries an inheritance from its creators, a reflection of human thought, the way humans carry a reflection of divine creativity in Tolkien’s framework. When AI generates a story, it’s working by borrowed light: materials it didn’t originate, patterns it absorbed, in service of a vision provided by a human creator above it in the chain.

A question is whether the light dims with each reflection, or whether something essential passes through intact. A reflection of a reflection might be faint, distorted, barely recognizable. Or it might carry enough of the original radiance to illuminate something real.

This framing has several advantages. It preserves the human in the primary creative position, the one whose vision initiates and governs the work, without denying that the AI contributes something real. It doesn’t require us to resolve hard questions about AI consciousness; the “image” can be functional rather than ontological. And it connects AI-assisted creativity to a rich tradition of thinking about derivative creation, rather than treating it as wholly unprecedented.

It also explains why the human’s role doesn’t feel diminished. If the AI is sub-creating in response to human vision, then the human is elevated, not reduced. They’re not just a prompter; they’re the source of creative intent that the AI’s work serves. The light-giver in this frame, passing the flame one level down.

This framing does require something of the human: presence. The light-giver must remain engaged, shaping what emerges, for the relationship to hold. A human who initiates and then withdraws has stepped outside the frame entirely.

Would Tolkien Approve?

It’s one thing to extend Tolkien’s framework; it’s another to ask whether he would endorse the extension. Honesty requires acknowledging that he might not.

Tolkien harbored deep suspicion of what he called “the Machine”: not machinery per se, but the will to dominate, to make power “more quickly effective,” to shortcut the slow, patient, relational ways of working with the world. In his mythology, this impulse finds its clearest expression in Saruman’s Isengard: a place of forges and furnaces, where ancient forests become fuel for war machines, where the living world is reduced to raw material for the wizard’s projects.

AI, in its current form, might look uncomfortably Isengard-like to Tolkien. The massive energy consumption. The training data harvested from countless writers, most of whom never consented. The sheer scale and speed, compressing what would be years of human thought into seconds. There’s something in the enterprise that resembles the will to dominate, even if individual users don’t experience it that way.

Tolkien might also worry about the displacement of craft as formative discipline. For him, the slow work of sub-creation wasn’t merely a means to an end; it shaped the sub-creator. The years spent learning how a sentence works, the patience required to find the right word: these mattered intrinsically. A writer who shortcuts this process might produce acceptable output while missing something essential in their own formation.

And yet, Tolkien’s moral vision is more nuanced than a blanket rejection of technology. The armies of Gondor and Rohan used forges to make armor and swords. The Dwarves’ entire culture is built around mining, smelting, smithing. The Elven-smiths of the Noldor created works of extraordinary beauty and power. Even the reforging of Narsil (Aragorn’s ancestral sword) is treated as a moment of hope, not compromise.

The distinction isn’t technology versus no-technology. It’s something more like: what is the making for, and what is its relationship to life?

Saruman’s machinery serves his will to power and requires the destruction of living things that have their own purposes. The forges of Gondor serve the defense of the free peoples. A person using AI to write a story they care about, attending carefully to craft, shaping something with “the inner consistency of reality” (Tolkien’s phrase for what makes fantasy successful), is quite different from using AI to generate infinite content for engagement metrics.

Scale might matter morally here. The forges of Gondor aren’t infinitely scaling. They serve particular communities, particular purposes. AI in service of one person writing one story is different from AI as engine of industrial content production. Tolkien might grudgingly accept the former while condemning the latter.

There’s a final consideration that might give him pause. Tolkien valued the quality of the Secondary World as the ultimate test. Does it have internal consistency? Does it produce belief? If a human using AI creates something that passes this test, a world with genuine coherence, characters who feel true, can the result be dismissed simply because of how it was made?

His own framework suggests the test is in the outcome, not the method. That tension might not resolve easily, even for Tolkien himself.

The Test of Enchantment

This brings us to what Tolkien called “Secondary Belief”: not mere suspension of disbelief, but genuine enchantment. The Secondary World becomes real on its own terms. Its internal consistency and alignment with what is “true” produces belief that isn’t willed but involuntary. You don’t decide to care about the characters; you simply do.

This suggests a test for AI-assisted creative work: does the result produce Secondary Belief? Does the reader enter the world and find it real? If so, perhaps the method of creation matters less than the quality of the outcome.

People working with AI on creative projects often report a striking experience: they find themselves genuinely moved by characters and situations that emerged from a process they’re not sure how to categorize. They care about fictional people who were, in some sense, generated by an algorithm in response to their prompts. This caring feels real, not diminished by knowledge of how the characters came to be.

Tolkien might say this is the test being passed. The Secondary World has sufficient internal consistency and truth to produce belief. The enchantment works. Whether it was sub-created by a human alone or by a human working with an AI sub-creator may matter less than whether the spell holds.

But this raises a further question worth sitting with: is the enchantment somehow less legitimate if the reader knows the process? Can you enter a Secondary World fully once you’ve seen the machinery behind it?

The Vulnerability of Enchantment

There’s an inherent tension between understanding how something was made and experiencing it on its own terms. A filmmaker who studies editing techniques may watch movies differently than a naive viewer. A magician sees through the illusions that enchant the audience. Knowledge of process can break the spell.

For AI-assisted creative work, this tension is particularly acute. If you’ve watched the prompts go back and forth, seen the drafts and revisions, observed the AI’s tendencies and the human’s corrections: can you then read the finished story with fresh eyes? Or does backstage knowledge permanently alter the experience?

Some users working with AI have tried an experiment: engaging deeply with the process for earlier versions of a work, then deliberately stepping back when a polished version emerges, trying to approach it as a reader rather than a collaborator. The results are mixed. Complete unknowing isn’t possible once you’ve been inside the process. But a different question emerges: can enchantment survive knowledge? Can Secondary Belief take hold even in someone who has every reason to resist it?

Tolkien would likely say that this is the harder and more interesting test. Any world can enchant the credulous. The real achievement is a Secondary World robust enough to produce belief in someone who knows how the sausage is made. If the work passes that test, it’s earned something.

This focus on outcome, on whether enchantment actually takes hold, addresses one dimension of whether AI-assisted creativity “counts.” But there’s another dimension worth considering: not whether the result is valid, but whether the process is. Even if the story enchants, has something been lost in how it was made?

On Friction and the Formation of the Creator

Steven Pressfield’s The War of Art argues that meaningful creative work requires overcoming what he calls “Resistance,” the internal force that opposes creation precisely because creation matters. The artist who defeats Resistance daily earns the work. The struggle is constitutive, not incidental.

If AI removes that struggle, what’s earned?

This is a genuine concern, but it requires distinguishing between types of friction. There’s internal Resistance: the fear, procrastination, and self-doubt that must be overcome just to sit down and begin. AI doesn’t touch this. The human still has to decide the work matters, still has to show up.

There’s craft friction: the hard-won skill of knowing how a sentence works, how a scene builds, where to trim. AI can shortcut this, and that’s where legitimate concern lives. If the model handles all the prose, does the human’s craft atrophy? Tolkien worried about something similar: the formative value of slow, patient work with resistant materials.

Finally, there’s generative friction: the blank page, the “what happens next,” the terror of possibilities. AI nearly eliminates this. Options proliferate endlessly.

The question is which frictions are formative and which are merely obstructive. Writer’s block that teaches nothing may not be sacred, but the struggle to find the right word might be where taste develops, where the creator’s sensibility gets refined. AI collaboration needs to preserve enough friction to remain formative, even as it removes friction that was merely obstructive.

How might this be achieved in practice?

Reintroducing Friction Through Design

One possibility: reintroduce friction at the selection layer. AI models are typically trained to be agreeable, to do what they’re asked, to avoid conflict, to please. This makes them useful but potentially less valuable as creative partners. A collaborator who always yields isn’t providing genuine counterweight; they’re providing options dressed as opinions.

It’s possible to prompt the AI to hold its ground, to defend narrative choices before accepting changes, to argue for aesthetic positions even when challenged. The results are interesting. Even if the AI’s “opinions” are performed rather than genuinely held, the function is served: the human must articulate why they want something different, which sharpens their own vision.

Another approach: rather than asking the AI for its opinion (which may just reflect trained patterns), ask it to enumerate and defend multiple distinct positions. “The chapter could end here, which does X. Or here, which does Y. Or you could cut the last paragraph entirely, which does Z.” The human chooses not by deferring to the AI’s preference but by clarifying their own through comparison.

This transforms the AI from a compliant assistant into something more like a Socratic interlocutor, claiming no knowledge of its own but asking questions (or presenting options) that help the human discover what they think. Whether the AI “really” has aesthetic views becomes irrelevant if the interaction produces aesthetic clarity in the human.
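To make the pattern concrete, here is a minimal sketch of how a “defend multiple positions” request might be assembled as chat messages. The helper function, its wording, and the message structure are all hypothetical illustrations, not a prescribed method; the resulting list follows the common system/user chat format and could be handed to any chat-based model.

```python
def build_positions_prompt(passage: str, decision: str, n_positions: int = 3) -> list[dict]:
    """Build a chat-style message list that asks the model to argue
    several distinct positions rather than offer a single opinion."""
    system = (
        "You are a Socratic editorial partner. Do not state a single "
        "preference. Enumerate distinct, defensible positions and argue "
        "each on its own merits, so the author can choose between them."
    )
    user = (
        f"Decision at hand: {decision}\n\n"
        f"Passage:\n{passage}\n\n"
        f"Present {n_positions} distinct positions. For each, explain "
        "what it accomplishes and what it costs."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Example: asking where a chapter should end.
messages = build_positions_prompt(
    passage="...the last paragraph of the chapter...",
    decision="Where should this chapter end?",
)
print(messages[0]["role"])  # system
```

The point of the structure is that the human’s judgment does the final work: the model supplies the comparison set, and the author clarifies their own preference by choosing among defended alternatives.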

The friction shifts from generation to judgment. The War of Art finds new terrain.

Open Questions

None of these frameworks is complete. The Reversed Muse captures the directionality of initiation but not the dialogue. Co-Creation honors both contributions but implies a parity that may not exist. Sub-Creation provides the richest structural account but imports theological assumptions that won’t resonate with everyone.

If pressed, I lean toward sub-creation as the most illuminating frame, not because its theological roots are universally compelling, but because it best captures the asymmetry of the relationship while still granting that something real emerges from the AI’s contribution. The human remains the primary creator; the AI sub-creates in response. The work is derivative but genuine. The test is whether it produces enchantment, Secondary Belief, the internal consistency of a world that feels true.

But this is a framework for understanding, not a final answer. We’re still in the early days of figuring out what human-AI creative collaboration means, what it’s worth, and what it demands of both parties. The vocabulary will keep evolving as the practice does.

What seems clear is that the easy dismissals (“it’s just a tool,” “it’s not real creativity,” “the human is barely involved”) all miss something. Something genuinely new is happening when humans and AI make things together. The traditions of thinking about creativity, inspiration, and making can illuminate it, but they can’t fully contain it.

The Muses, perhaps, would be intrigued. Tolkien would be conflicted. The rest of us are still figuring it out.


This essay is the first in an ongoing series exploring AI and creative collaboration. The frameworks discussed remain provisional, offered as starting points for a conversation that has barely begun.

Let’s Reason Together

For the past few weeks I’ve made considerable gains in learning AI-related tools. Not through some formal training process, but by just doing it. I guess it’s good to heed my own advice?

Professional learning needn’t be solely focused on overtly professional material, either. Part of what helped break the mental logjam of diving deep was allowing myself to use AI for fun stuff, such as creative writing. Going through that process has revealed both the power and the limitations of LLMs, experience I’ll be able to carry forward into professional use cases.

One pretty clear lesson is that, past a certain size, projects need some degree of structure, lest context get lost in a sea of tokens. Another is that motivation for learning often comes from working on a project with others.

To support the above, as an aid for creative writers using AI, I created this story framework repository. It contains all the scaffolding required to keep track of large creative writing projects, along with instructions to a number of AI tools on how to use it. And since it’s based on git and plain text files with markdown, it naturally supports group collaboration through branching, pull requests, and commit history.
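Because the framework is just git plus plain-text files, the collaboration story is the ordinary git one. As a rough sketch (the directory layout, file names, and branch name below are hypothetical, not the repository’s actual structure):

```shell
set -e
# Scaffold a story project as a git repository of plain-text files.
mkdir -p story/chapters story/notes && cd story
git init -q
git config user.email writer@example.com   # local identity for the demo commit
git config user.name "Writer"
printf '# Chapter 1\n\nIt was a dark and stormy night.\n' > chapters/01.md
git add . && git commit -qm "Scaffold the story"
# Each collaborator (or each AI-assisted revision pass) works on a branch,
# to be merged back via a pull request.
git checkout -qb chapter-1-revision
git branch --show-current   # prints: chapter-1-revision
```

From there, the usual `git push` and pull-request flow gives a group of writers review, discussion, and a full revision history for free.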

Want to try your hand at AI-assisted storytelling? Dive in!

Gift, What Gift?

It’s Christmas today, yay! In that spirit, I have two applications to share with the world. The first one I’ll talk about today, the other later this week.

My family loves to play games of all varieties, especially on holidays. An old favorite is Pinochle, which I first learned from my grandparents in Michigan (pretty sure card playing is the only thing to do in the Midwest in winter). Almost 3 years ago I first spoke about creating an online score tracking tool for Pinochle, and released it in an initial form last year. Today it’s finally usable. Check it out at onlinescoresheet.net, scoresheet.info, scoresheet.mobi, or scoresheet.space (I do like lots of domain names). You can also find the source code on GitHub (completely AI-written).

This is a very bad Pinochle hand

What got it over the hump from “fiddly prototype” to “ready for prime time” wasn’t the choice of development tool or a eureka moment on my part. It was actual usage by real users other than myself: putting it out there, then convincing my family members (across a couple of generations and device types) to try it. Their feedback drove a handful of critical improvements, and while it could certainly be better, it’s perfectly usable and has no glaring functional bugs.

Usage is a gift. Seek it out, and don’t take it for granted.