Unfunded Mandates
This essay was developed through conversation with Anthropic’s Claude, drawing on dialogue with colleagues whose observations are quoted with permission. The ideas, arguments, structure, and final language are mine. In the taxonomy this essay proposes: Creator mode, reviewed and owned.
Someone sent me a design document recently. It was long enough to require several hours of careful review, yet unmistakably AI-generated and just as clearly unreviewed by the person who sent it. My best guess was that the prompt was written in about five minutes, ten tops.
I was being asked to invest hours evaluating a document its author had invested minutes producing. They weren’t being malicious. They were doing what the tools make easy: generate at volume, ship immediately, and let someone downstream sort it out. But here’s the thing about frictionless production: it doesn’t eliminate the work. It relocates it. When this is done without acknowledgment, what arrives isn’t collaboration. It’s an unfunded mandate for someone else’s cognitive labor.
The Confidence Gap
A friend of mine put the problem this way: AI “allows people who have never even thought of a subject matter to opine as if they were well-versed, while maintaining the same level of comprehension they started with.”
Comprehension stays flat, but confidence soars. And when that confident-sounding output is sent to another person with no indication of how it was produced, the recipient has no way to know how much trust or attention to invest.
We already navigate this intuitively in the analog world. You read a handwritten letter differently from a form letter. You respond to a heartfelt apology differently from a corporate PR statement. In each case you’re calibrating based on how much of the sender invested of themselves in the message.
AI has blown up that calibration. A ChatGPT-generated email looks identical to a carefully composed one. An AI-drafted design doc reads the same as one built through weeks of analysis. When the surface is all you can see, you either give too much attention to content that doesn’t deserve it or you start distrusting everything. Both are bad.
Not All Words Are Equal
Deb Roy, writing in The Atlantic, recently argued that AI has decoupled speech from consequence for the first time in history. When language comes from a system that bears no vulnerability for what it says, the moral structure of language erodes. Promises hollow out, apologies become theater, and nobody stands behind the words because there’s nobody there to stand.
Roy’s got a point. But I think he’s looking for the break in the wrong joint. He sees consequence as something that lives in the speaker. I think it lives in the relationship between sender and recipient, and that relationship hinges not on what produced the words but on what the sender did with them before passing them along.
Roy’s framework also treats all speech as if it carries the same moral weight. It doesn’t. Not all communication requires the same degree of human presence. When sending a personal email to a friend going through a difficult time, the human presence in the words is the entire point. You’re not conveying information. You’re communicating that you cared enough to sit down and find the right words. AI could write something more polished. It would mean less.
Compare that to writing a README for an open-source code repository. Here, the accuracy and clarity of the content is the entire point. Nobody reading the README cares whether you personally typed every word. They care whether the installation instructions work. I’ve had AI write several of these recently. As long as I validate the content, nobody is harmed.
Most communication falls between those poles: a design doc for your team, Slack messages to a direct report, a proposal for a government contract. The right level of human involvement varies for each, and it’s not always obvious where the line falls.
Toward Responsible Disclosure
I propose a simple rule of thumb: never ask the consumer to invest more than you did as the producer.
If you spent five minutes generating a document, don’t send it with an ask that requires hours of review. If you haven’t read your own output, don’t ask someone else to read it for you. The effort you put into producing and refining what you send sets the ceiling for what you can reasonably ask of your audience. Violate this, and you’re not collaborating. You’re offloading.
How do you put this into practice? Disclosure.
Think of it as the communication equivalent of open source licensing. A license doesn’t tell you whether a product is good. It tells you what expectations and obligations attach to it. Disclosure does the same: it tells the recipient what kind of social contract they’re entering.
Three questions form the backbone:
What was your role?
In earlier work on AI-assisted creativity, I mapped human-AI relationships to distinct approaches: Author, Muse, Artisan, Debater, Creator, Curator. The same taxonomy applies to communication; when you send something to another person, it’s worth knowing (and disclosing) which mode you were operating in:
- Author: entirely my words. No AI. Read this as direct human communication.
- Creator: I developed this with AI assistance. The thinking is mine; AI helped me organize and express it. Read it as authored-with-assistance.
- Artisan: AI generated a draft. I reshaped and validated it substantially. Read it as human-refined.
- Curator: AI generated this. I selected and organized but didn’t deeply rework it. Read it as a starting point, not a finished product.
Each is legitimate in the right context. What’s not legitimate is sending Curator-level work with Author-level expectations attached.
What have you validated?
“AI generated, I haven’t read it” and “AI generated, I’ve verified the technical claims but the prose is rough” and “AI assisted, but the architecture and recommendations are mine”—these are three very different things. Your reader needs to know which one they’re holding. Say so.
What do you expect from the recipient?
Match your ask to your effort. If you haven’t put in hours, don’t ask for hours. If what you need is a five-minute directional check, say that explicitly rather than giving a vague “please review.”
The person who sent me that document could have written: “I used AI to generate a first-pass design doc based on the requirements we discussed. I haven’t reviewed it in depth yet. Could you skim it and tell me if the general approach is sound before I invest time refining it?” That’s honest and proportional.
Connection Boundary
There’s one area where disclosure isn’t enough, where AI involvement changes what the words communicate no matter how transparent you are about it. That area is personal correspondence: emails to friends, texts to family, messages of condolence or congratulations or love.
These are acts where the human effort of finding words is itself the gift. When you write to someone who’s grieving, the struggle to say the right thing, the imperfection of what you manage, the fact that you sat with the blinking cursor and tried, that’s what communicates care. A perfectly worded AI-generated sympathy message is, in every sense that matters, less than a clumsy human one. Not because the words are worse. Because the act of writing is absent. Using AI here isn’t labor-saving. It’s a category error, like sending a robot to your friend’s funeral because it would deliver a better eulogy.
Call this dividing line the connection boundary. Below it, on the information-transfer side, AI involvement is a question of degree and disclosure. Above it, on the human-connection side, AI involvement eliminates the thing that makes the communication matter.
This doesn’t mean AI can’t play any role in personal communication. Using it to think through what you want to say, to consider whether your message might be misread, that keeps you in the loop. The line is between using AI to prepare yourself to communicate and using AI to communicate for you. The first is rehearsal. The second is outsourcing connection.
Counterfeit Collaboration
A question worth asking before you send unreviewed AI output to anyone: do you really need the other person’s input before your own review and refinement? Or are you simply avoiding the work?
Sending raw output without oversight likely violates the investment principle on its face. You’re asking someone else to do the thinking you skipped. If you genuinely need a gut check on direction before doing the hard work of refinement, say so explicitly. But this should be a rare exception, not a default workflow. If it’s becoming habit, the tool isn’t saving you time; it’s helping you avoid learning how to evaluate your own work.
And there’s something worse about this pattern than mere laziness. When you send unreviewed AI output and ask someone to “have a look and we can discuss,” you’re wearing the costume of collaboration while gutting its substance. A real discussion about a design or a proposal requires both parties to have done enough thinking to bring something to the table.
The language says partnership. The reality says: I need you to do this for me.
The Risk of Atrophy
Writing is not just a way to record thoughts. It’s a way to have them. The process of drafting, that’s where clarity gets forged: trying to fit a complex idea into a sentence and failing, then trying again. If I’ve learned anything from writing this blog, it’s that the thinking happens in the writing, not before it.
The person who routinely sends unreviewed AI output isn’t just issuing unfunded mandates for other people’s attention. They’re outsourcing their own professional development: losing the capacity to think critically about their domain because they’ve stopped doing the work that critical thinking requires.
Contemplation Is Not Dialogue
Everything above addresses what you owe the person who receives your words. But there’s another problem, a quieter problem, and it’s about what you think you’ve accomplished by the time you hit send.
A philosophically minded colleague noted that one’s assumptions can be “reflected back in the form of a dialogue when the AI response is actually closer to memory.” Real dialogue, the kind many conceptions of truth depend on, is how we test whether our views hold up against people who see the world differently. Knowing what’s true has a social and ethical component that AI can mimic and support but cannot itself be.
This matters practically. You have a long, productive-feeling conversation with a chatbot. It pushes back on your reasoning. Offers counterarguments. Helps you refine your thinking. By the end, you feel like your ideas have been stress-tested. But against what? The AI challenged your inferences, maybe capably. What it couldn’t do is challenge your priors with the weight of a different life behind the challenge. It has no competing commitments, no lived experience that diverges from yours, no stake in the outcome. The conversation felt like dialogue, but it was more like structured contemplation.
None of this makes AI conversation worthless. I obviously don’t think that, given that this essay grew out of one. Structured contemplation has real value, and it can help prepare for the real dialogue when it happens. The danger is mistaking the rehearsal for the performance.
Call this the internal version of the investment principle. The external version says: don’t ask your reader to put in more than you did. The internal version says: be honest about what kind of thinking you did. Working through ideas with AI is real work. Defending those ideas to a skeptical colleague who brings different assumptions to the table is different work. Both are useful, but the former is no substitute for the latter.
We’re early in figuring out what human-AI communication actually is. The analogies we reach for—tool, collaborator, ghostwriter, mirror—each grab a piece of it while dropping the rest. Better language will come. Until it does, the investment principle can serve us well: simple enough to apply right now, and honest enough to keep the responsibility where it belongs.
This essay is a companion to a series on AI and creativity: Light From Light, By Their Fruits, and Spellcraft. Those essays explored frameworks for human-AI creative collaboration. This one extends that thinking into personal and business communication.