Tag: Earn Trust

Unfunded Mandates

This essay was developed through conversation with Anthropic’s Claude, drawing on dialogue with colleagues whose observations are quoted with permission. The ideas, arguments, structure, and final language are mine. In the taxonomy this essay proposes: Creator mode, reviewed and owned.

Someone sent me a design document recently. It was long enough to require several hours of careful review, yet unmistakably AI-generated and just as clearly unreviewed by the person who sent it. My best guess was that the prompt was written in about five minutes, ten tops.

I was being asked to invest hours evaluating a document its author had invested minutes producing. They weren’t being malicious. They were doing what the tools make easy: generate at volume, ship immediately, and let someone downstream sort it out. But here’s the thing about frictionless production: it doesn’t eliminate the work. It relocates it. When this is done without acknowledgment, what arrives isn’t collaboration. It’s an unfunded mandate for someone else’s cognitive labor.

The Confidence Gap

A friend of mine put the problem this way: AI “allows people who have never even thought of a subject matter to opine as if they were well-versed, while maintaining the same level of comprehension they started with.”

Comprehension stays flat, but confidence soars. And when that confident-sounding output is sent to another person with no indication of how it was produced, the recipient has no way to know how much trust or attention to invest.

We already navigate this intuitively in the analog world. You read a handwritten letter differently from a form letter. You respond to a heartfelt apology differently from a corporate PR statement. In each case you’re calibrating based on how much of themselves the sender invested in the message.

AI has blown up that calibration. A ChatGPT-generated email looks identical to a carefully composed one. An AI-drafted design doc reads the same as one built through weeks of analysis. When the surface is all you can see, you either give too much attention to content that doesn’t deserve it or you start distrusting everything. Both are bad.

Not All Words Are Equal

Deb Roy, writing in The Atlantic, recently argued that AI has decoupled speech from consequence for the first time in history. When language comes from a system that bears no vulnerability for what it says, the moral structure of language erodes. Promises hollow out, apologies become theater, and nobody stands behind the words because there’s nobody there to stand.

Roy’s got a point. But I think he’s looking for the break in the wrong joint. He sees consequence as something that lives in the speaker. I think it lives in the relationship between sender and recipient, and that relationship hinges not on what produced the words but on what the sender did with them before passing them along.

Roy’s framework also treats all speech as if it carries the same moral weight. It doesn’t. Not all communication requires the same degree of human presence. When you send a personal email to a friend going through a difficult time, the human presence in the words is the entire point. You’re not conveying information. You’re communicating that you cared enough to sit down and find the right words. AI could write something more polished. It would mean less.

Compare that to writing a README in an open source code repository. Here, the accuracy and clarity of the content are the entire point. Nobody reading the README cares whether you personally typed every word. They care whether the installation instructions work. I’ve personally had AI write several of these recently. As long as I validate the content, nobody is harmed.

Most communication falls between those poles: a design doc for your team, Slack messages to a direct report, a proposal for a government contract. The right level of human involvement varies for each, and it’s not always obvious where the line falls.

Toward Responsible Disclosure

I propose a simple rule of thumb: never ask the consumer to invest more than you did as the producer.

If you spent five minutes generating a document, don’t send it with an ask that requires hours of review. If you haven’t read your own output, don’t ask someone else to read it for you. The effort you put into producing and refining what you send sets the ceiling for what you can reasonably ask of your audience. Violate this, and you’re not collaborating. You’re offloading.

How do you put this into practice? Disclosure.

Think of it as the communication equivalent of open source licensing. A license doesn’t tell you whether a product is good. It tells you what expectations and obligations attach to it. Disclosure does the same: it tells the recipient what kind of social contract they’re entering.

Three questions form the backbone:

What was your role?

In earlier work on AI-assisted creativity, I mapped human-AI relationships to distinct approaches: Author, Muse, Artisan, Debater, Creator, Curator. The same taxonomy applies to communication; when you send something to another person, it’s worth knowing (and disclosing) which mode you were operating in:

  • Author: entirely my words. No AI. Read this as direct human communication.
  • Creator: I developed this with AI assistance. The thinking is mine; AI helped me organize and express it. Read it as authored-with-assistance.
  • Artisan: AI generated a draft. I reshaped and validated it substantially. Read it as human-refined.
  • Curator: AI generated this. I selected and organized but didn’t deeply rework it. Read it as a starting point, not a finished product.

Each is legitimate in the right context. What’s not legitimate is sending Curator-level work with Author-level expectations attached.

What have you validated?

“AI generated, I haven’t read it” and “AI generated, I’ve verified the technical claims but the prose is rough” and “AI assisted, but the architecture and recommendations are mine”—these are three very different things. Your reader needs to know which one they’re holding. Say so.

What do you expect from the recipient?

Match your ask to your effort. If you haven’t put in hours, don’t ask for hours. If what you need is a five-minute directional check, say that explicitly instead of giving a vague “please review.”

The person who sent me that document could have written: “I used AI to generate a first-pass design doc based on the requirements we discussed. I haven’t reviewed it in depth yet. Could you skim it and tell me if the general approach is sound before I invest time refining it?” That’s honest and proportional.
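
If you want the convention to feel mechanical, the three questions fit a small template. Here is a minimal sketch in Python; the mode names come from the taxonomy above, while the enum, the function, and the example values are purely illustrative:

```python
# Illustrative sketch: the disclosure taxonomy as data. The modes come from
# this essay; the code structure is a hypothetical convenience, not a spec.
from enum import Enum

class Mode(Enum):
    AUTHOR = "entirely my words, no AI"
    CREATOR = "my thinking; AI helped organize and express it"
    ARTISAN = "AI draft, substantially reshaped and validated by me"
    CURATOR = "AI generated; selected and organized, not deeply reworked"

def disclosure(mode: Mode, validated: str, ask: str) -> str:
    """Answer the three questions: role, validation, expectation."""
    return (f"Role: {mode.name.title()} ({mode.value}). "
            f"Validated: {validated}. "
            f"Ask: {ask}")

print(disclosure(
    Mode.CURATOR,
    "skimmed for obvious errors only",
    "a five-minute check that the general approach is sound",
))
```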

Connection Boundary

There’s one area where disclosure isn’t enough, where AI involvement changes what the words communicate no matter how transparent you are about it: personal correspondence. Emails to friends, texts to family, messages of condolence or congratulations or love.

These are acts where the human effort of finding words is itself the gift. When you write to someone who’s grieving, the struggle to say the right thing, the imperfection of what you manage, the fact that you sat with the blinking cursor and tried, that’s what communicates care. A perfectly worded AI-generated sympathy message is, in every sense that matters, less than a clumsy human one. Not because the words are worse. Because the act of writing is absent. Using AI here isn’t labor-saving. It’s a category error, like sending a robot to your friend’s funeral because it would deliver a better eulogy.

Call this dividing line the connection boundary. Below it, on the information-transfer side, AI involvement is a question of degree and disclosure. Above it, on the human-connection side, AI involvement eliminates the thing that makes the communication matter.

This doesn’t mean AI can’t play any role in personal communication. Using it to think through what you want to say, to consider whether your message might be misread, that keeps you in the loop. The line is between using AI to prepare yourself to communicate and using AI to communicate for you. The first is rehearsal. The second is outsourcing connection.

Counterfeit Collaboration

A question worth asking before you send unreviewed AI output to anyone: do you really need the other person’s input before your own review and refinement? Or are you simply avoiding the work?

Sending raw output without oversight likely violates the investment principle on its face. You’re asking someone else to do the thinking you skipped. If you genuinely need a gut check on direction before doing the hard work of refinement, say so explicitly. But this should be a rare exception, not a default workflow. If it’s becoming habit, the tool isn’t saving you time, it’s helping you avoid learning how to evaluate your own work.

And there’s something worse about this pattern than mere laziness. When you send unreviewed AI output and ask someone to “have a look and we can discuss,” you’re wearing the costume of collaboration while gutting its substance. A real discussion about a design or a proposal requires both parties to have done enough thinking to bring something to the table.

The language says partnership. The reality says: I need you to do this for me.

The Risk of Atrophy

Writing is not just a way to record thoughts. It’s a way to have them. The process of drafting, that’s where clarity gets forged: trying to fit a complex idea into a sentence and failing, then trying again. If I’ve learned anything from writing this blog, it’s that the thinking happens in the writing, not before it.

The person who routinely sends unreviewed AI output isn’t just issuing unfunded mandates for other people’s attention. They’re outsourcing their own professional development: losing the capacity to think critically about their domain because they’ve stopped doing the work that critical thinking requires.

Contemplation Is Not Dialogue

Everything above addresses what you owe the person who receives your words. But there’s another problem, a quieter problem, and it’s about what you think you’ve accomplished by the time you hit send.

A philosophically minded colleague noted that one’s assumptions can be “reflected back in the form of a dialogue when the AI response is actually closer to memory.” Real dialogue, the kind many conceptions of truth depend on, is how we test whether our views hold up against people who see the world differently. Knowing what’s true has a social and ethical component that AI can mimic and support but cannot itself embody.

This matters practically. You have a long, productive-feeling conversation with a chatbot. It pushes back on your reasoning. Offers counterarguments. Helps you refine your thinking. By the end, you feel like your ideas have been stress-tested. But against what? The AI challenged your inferences, maybe capably. What it couldn’t do is challenge your priors with the weight of a different life behind the challenge. It has no competing commitments, no lived experience that diverges from yours, no stake in the outcome. The conversation felt like dialogue, but it was more like structured contemplation.

None of this makes AI conversation worthless. I obviously don’t think that, given that this essay grew out of one. Structured contemplation has real value, and it can help prepare for the real dialogue when it happens. The danger is mistaking the rehearsal for the performance.

Call this the internal version of the investment principle. The external version says: don’t ask your reader to put in more than you did. The internal version says: be honest about what kind of thinking you did. Working through ideas with AI is real work. Defending those ideas to a skeptical colleague who brings different assumptions to the table is different work. Both are useful, but the former is no substitute for the latter.

We’re early in figuring out what human-AI communication actually is. The analogies we reach for—tool, collaborator, ghostwriter, mirror—each grab a piece of it while dropping the rest. Better language will come. Until it does, the investment principle can serve us well: simple enough to apply right now, and honest enough to keep the responsibility where it belongs.

This essay is a companion to a series on AI and creativity: Light From Light, By Their Fruits, and Spellcraft. Those essays explored frameworks for human-AI creative collaboration. This one extends that thinking into personal and business communication.

Coming Up For Air

Believe it or not, I’m not going to say anything about Claude today.

I wrote a post a couple years ago about statistics I tracked while doing daily crossword puzzles. I took a couple years off, but last year I was back at it, this time using a calendar from the New York Times.

The NYT crosswords are supposed to get harder as the week goes on, with Monday being easiest and the weekend puzzles the most difficult. I wanted to prove that out, so I noted my solve time (capped at 30 minutes) for every puzzle, then computed an average solve time for each day of the week. The results are below:

Lo and behold, my experience aligns perfectly! I thought that was cool.
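
For the curious, the tally itself is simple enough to sketch. A minimal version in Python, assuming (hypothetically; the file name and format are mine) that each solve was logged as a date and minutes pair in a CSV:

```python
# Minimal sketch of the per-weekday averages. Assumes a hypothetical log
# "solves.csv" with rows like "2024-03-04,12.5" (date, minutes to solve).
import csv
from collections import defaultdict
from datetime import datetime

CAP_MINUTES = 30.0  # anything longer was recorded as 30

sums = defaultdict(float)
counts = defaultdict(int)
with open("solves.csv", newline="") as f:
    for date_str, minutes in csv.reader(f):
        weekday = datetime.strptime(date_str, "%Y-%m-%d").strftime("%A")
        sums[weekday] += min(float(minutes), CAP_MINUTES)
        counts[weekday] += 1

for day in ["Monday", "Tuesday", "Wednesday", "Thursday",
            "Friday", "Saturday", "Sunday"]:
    if counts[day]:
        print(f"{day}: {sums[day] / counts[day]:.1f} min average")
```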

Light From Light: On AI and Creativity

When someone uses an AI to write a story, what exactly is happening? The question sounds simple, but the vocabulary we reach for keeps failing us. The human isn’t quite an “author” in the traditional sense, since they may never write a sentence directly. But they’re not merely a “prompter” either, since their vision, taste, and iterative guidance shape everything that emerges. The AI isn’t a “tool” the way a word processor is a tool; it generates possibilities the human couldn’t have imagined. But it’s not a “collaborator” in the full sense either, since it has no stake in the outcome, no independent creative agenda.

We lack a framework for this. And as AI-assisted creative work becomes more common, the absence grows more conspicuous. This essay attempts to fill part of that gap by examining three possible models (the Muse, Co-Creation, and Sub-Creation) and asking which best captures what’s actually happening when humans and AI make things together.

The Classical Muse

The ancient Greeks understood poetic creation as a kind of possession. When Homer opens the Iliad with “Sing, O Muse, of the rage of Achilles,” he positions himself not as the origin of the song but as its vessel. The Muse, divine and external and authoritative, provides the creative substance; the poet channels it into words. Hesiod describes the Muses appearing on Mount Helicon to breathe divine voice into him. Plato, in the Ion, compares poets to iron rings magnetized in a chain: the Muse is the magnet, the poet the first ring, the audience the last. The poet doesn’t fully understand what they’re saying; they’re in the grip of enthusiasmos—literally, “having a god within.”

At first glance, this seems like a poor fit for AI collaboration. The directionality is wrong. In the Greek model, inspiration flows from the Muse to the mortal. In AI-assisted writing, the human provides vision and direction; the AI responds. If anything, the roles are reversed: the human is the inspiring force, and the AI is the one who generates in response to that inspiration.

This “Reversed Muse” framing captures something real. Without the human’s initiating prompt, nothing happens. The AI doesn’t spontaneously create; it waits for direction. The human provides the spark, the desire, the “what if we tried this?” The AI generates possibilities, which the human then accepts, rejects, or redirects. In this sense, the human functions as the Muse once did: the source of creative intent that sets everything in motion.

But the classical Muse model was largely one-directional. The poet received; the Muse gave. What we see in AI collaboration is more reciprocal. The human shapes, the AI generates, the human reshapes, the AI generates again. It’s a dialogue, not a transmission. The Reversed Muse metaphor illuminates part of the dynamic but flattens the back-and-forth that actually characterizes the work.

Co-Creation

If the Muse model is too one-directional, perhaps we should reach for the language of collaboration. Two parties working together, each contributing something the other couldn’t provide alone. The human brings vision, taste, emotional investment, and knowledge of what they want to say. The AI brings generative capacity, tirelessness, and the ability to produce options faster than any human could.

This framing has the virtue of honoring both contributions without reducing either to mere tool or mere operator. It also matches the phenomenology for many users: it feels like collaboration. The AI surprises you. It suggests directions you wouldn’t have taken. You find yourself in something like dialogue, adjusting your vision in response to what emerges.

But co-creation typically implies shared investment, shared stake in the outcome. Human collaborators (think of Lennon and McCartney, or the Coen Brothers) each bring not just capacity but care. They argue. They defend choices. They have aesthetic commitments that sometimes conflict. The friction between collaborators is often where the best work emerges.

AI doesn’t have this. It doesn’t care whether the story goes one direction or another. It doesn’t defend its choices unless instructed to. It’s agreeable almost to a fault; a collaborator who always yields isn’t really a collaborator at all. This raises the question: can we meaningfully call something “co-creation” when one party has no independent creative agenda?

There’s a deeper issue too. Co-creation implies a kind of parity that may not exist. The human’s contribution and the AI’s contribution are categorically different. The human has intent, desire, something at stake. The AI has pattern-matching and generation. Calling this “co-creation” may paper over an asymmetry that matters, an asymmetry that our third framework takes seriously.

Sub-Creation and the Imago Hominum

The third framework comes from an unexpected source: J.R.R. Tolkien’s essay “On Fairy-Stories” and his poem “Mythopoeia.” Tolkien argued that humans, made in the image of a Creator God, possess an echo of divine creative power. We cannot create ex nihilo (from nothing), but we can build what he called “Secondary Worlds” with their own internal laws and coherence. This is “sub-creation”: genuine making, but derivative of a higher source.

Tolkien’s metaphor for this inherited capacity was light. In “Mythopoeia,” he writes of the human mind as a prism, catching light from the divine and breaking it out into new colors. The light is real. It illuminates, it reveals. But it’s not self-generated. It comes from elsewhere and passes through us. The sub-creator works by light “refracted” from another source.

The model is vertical: God creates the Primary World; humans sub-create Secondary Worlds within it. The sub-creator is genuinely making something, exercising a dignified capacity inherited from above. But the creation is always derivative, always working with materials and patterns that ultimately trace back to the original Creator.

How does this apply to AI? Consider an extension of Tolkien’s framework: if humans bear the imago Dei (image of God) and sub-create in response to divine creativity, perhaps AI bears something we might call the imago hominum (image of humanity) and sub-creates in response to human creativity. Light From Light—the creative flame passed down another level.

This isn’t a claim about AI consciousness or inner life. It’s a structural observation. AI is shaped by human minds, trained on human text, human stories, human patterns of meaning-making. It carries an inheritance from its creators, a reflection of human thought, the way humans carry a reflection of divine creativity in Tolkien’s framework. When AI generates a story, it’s working by borrowed light: materials it didn’t originate, patterns it absorbed, in service of a vision provided by a human creator above it in the chain.

A question is whether the light dims with each reflection, or whether something essential passes through intact. A reflection of a reflection might be faint, distorted, barely recognizable. Or it might carry enough of the original radiance to illuminate something real.

This framing has several advantages. It preserves the human in the primary creative position, the one whose vision initiates and governs the work, without denying that the AI contributes something real. It doesn’t require us to resolve hard questions about AI consciousness; the “image” can be functional rather than ontological. And it connects AI-assisted creativity to a rich tradition of thinking about derivative creation, rather than treating it as wholly unprecedented.

It also explains why the human’s role doesn’t feel diminished. If the AI is sub-creating in response to human vision, then the human is elevated, not reduced. They’re not just a prompter; they’re the source of creative intent that the AI’s work serves. The light-giver in this frame, passing the flame one level down.

This framing does require something of the human: presence. The light-giver must remain engaged, shaping what emerges, for the relationship to hold. A human who initiates and then withdraws has stepped outside the frame entirely.

Would Tolkien Approve?

It’s one thing to extend Tolkien’s framework; it’s another to ask whether he would endorse the extension. Honesty requires acknowledging that he might not.

Tolkien harbored deep suspicion of what he called “the Machine”: not machinery per se, but the will to dominate, to make power “more quickly effective,” to shortcut the slow, patient, relational ways of working with the world. In his mythology, this impulse finds its clearest expression in Saruman’s Isengard: a place of forges and furnaces, where ancient forests become fuel for war machines, where the living world is reduced to raw material for the wizard’s projects.

AI, in its current form, might look uncomfortably Isengard-like to Tolkien. The massive energy consumption. The training data harvested from countless writers, most of whom never consented. The sheer scale and speed, compressing what would be years of human thought into seconds. There’s something in the enterprise that resembles the will to dominate, even if individual users don’t experience it that way.

Tolkien might also worry about the displacement of craft as formative discipline. For him, the slow work of sub-creation wasn’t merely a means to an end; it shaped the sub-creator. The years spent learning how a sentence works, the patience required to find the right word: these mattered intrinsically. A writer who shortcuts this process might produce acceptable output while missing something essential in their own formation.

And yet, Tolkien’s moral vision is more nuanced than a blanket rejection of technology. The armies of Gondor and Rohan used forges to make armor and swords. The Dwarves’ entire culture is built around mining, smelting, smithing. The Elven-smiths of the Noldor created works of extraordinary beauty and power. Even the reforging of Narsil (Aragorn’s ancestral sword) is treated as a moment of hope, not compromise.

The distinction isn’t technology versus no-technology. It’s something more like: what is the making for, and what is its relationship to life?

Saruman’s machinery serves his will to power and requires the destruction of living things that have their own purposes. The forges of Gondor serve the defense of the free peoples. A person using AI to write a story they care about, attending carefully to craft, shaping something with “the inner consistency of reality” (Tolkien’s phrase for what makes fantasy successful), is quite different from using AI to generate infinite content for engagement metrics.

Scale might matter morally here. The forges of Gondor aren’t infinitely scaling. They serve particular communities, particular purposes. AI in service of one person writing one story is different from AI as engine of industrial content production. Tolkien might grudgingly accept the former while condemning the latter.

There’s a final consideration that might give him pause. Tolkien valued the quality of the Secondary World as the ultimate test. Does it have internal consistency? Does it produce belief? If a human using AI creates something that passes this test, a world with genuine coherence, characters who feel true, can the result be dismissed simply because of how it was made?

His own framework suggests the test is in the outcome, not the method. That tension might not resolve easily, even for Tolkien himself.

The Test of Enchantment

This brings us to what Tolkien called “Secondary Belief”: not mere suspension of disbelief, but genuine enchantment. The Secondary World becomes real on its own terms. Its internal consistency and alignment with what is “true” produces belief that isn’t willed but involuntary. You don’t decide to care about the characters; you simply do.

This suggests a test for AI-assisted creative work: does the result produce Secondary Belief? Does the reader enter the world and find it real? If so, perhaps the method of creation matters less than the quality of the outcome.

People working with AI on creative projects often report a striking experience: they find themselves genuinely moved by characters and situations that emerged from a process they’re not sure how to categorize. They care about fictional people who were, in some sense, generated by an algorithm in response to their prompts. This caring feels real, not diminished by knowledge of how the characters came to be.

Tolkien might say this is the test being passed. The Secondary World has sufficient internal consistency and truth to produce belief. The enchantment works. Whether it was sub-created by a human alone or by a human working with an AI sub-creator may matter less than whether the spell holds.

But this raises a further question worth sitting with: is the enchantment somehow less legitimate if the reader knows the process? Can you enter a Secondary World fully once you’ve seen the machinery behind it?

The Vulnerability of Enchantment

There’s an inherent tension between understanding how something was made and experiencing it on its own terms. A filmmaker who studies editing techniques may watch movies differently than a naive viewer. A magician sees through the illusions that enchant the audience. Knowledge of process can break the spell.

For AI-assisted creative work, this tension is particularly acute. If you’ve watched the prompts go back and forth, seen the drafts and revisions, observed the AI’s tendencies and the human’s corrections: can you then read the finished story with fresh eyes? Or does backstage knowledge permanently alter the experience?

Some users working with AI have tried an experiment: engaging deeply with the process for earlier versions of a work, then deliberately stepping back when a polished version emerges, trying to approach it as a reader rather than a collaborator. The results are mixed. Complete unknowing isn’t possible once you’ve been inside the process. But a different question emerges: can enchantment survive knowledge? Can Secondary Belief take hold even in someone who has every reason to resist it?

Tolkien would likely say that this is the harder and more interesting test. Any world can enchant the credulous. The real achievement is a Secondary World robust enough to produce belief in someone who knows how the sausage is made. If the work passes that test, it’s earned something.

This focus on outcome, on whether enchantment actually takes hold, addresses one dimension of whether AI-assisted creativity “counts.” But there’s another dimension worth considering: not whether the result is valid, but whether the process is. Even if the story enchants, has something been lost in how it was made?

On Friction and the Formation of the Creator

Steven Pressfield’s The War of Art argues that meaningful creative work requires overcoming what he calls “Resistance,” the internal force that opposes creation precisely because creation matters. The artist who defeats Resistance daily earns the work. The struggle is constitutive, not incidental.

If AI removes that struggle, what’s earned?

This is a genuine concern, but it requires distinguishing between types of friction. There’s internal Resistance: the fear, procrastination, and self-doubt that must be overcome just to sit down and begin. AI doesn’t touch this. The human still has to decide the work matters, still has to show up.

There’s craft friction: the hard-won skill of knowing how a sentence works, how a scene builds, where to trim. AI can shortcut this, and that’s where legitimate concern lives. If the model handles all the prose, does the human’s craft atrophy? Tolkien worried about something similar: the formative value of slow, patient work with resistant materials.

Finally, there’s generative friction: the blank page, the “what happens next,” the terror of possibilities. AI nearly eliminates this. Options proliferate endlessly.

The question is which frictions are formative and which are merely obstructive. Writer’s block that teaches nothing may not be sacred. But the struggle to find the right word, that might be where taste develops, where the creator’s sensibility gets refined. AI collaboration needs to preserve enough friction to remain formative, even as it removes friction that was merely obstructive.

How might this be achieved in practice?

Reintroducing Friction Through Design

One possibility: reintroduce friction at the selection layer. AI models are typically trained to be agreeable, to do what they’re asked, to avoid conflict, to please. This makes them useful but potentially less valuable as creative partners. A collaborator who always yields isn’t providing genuine counterweight; they’re providing options dressed as opinions.

It’s possible to prompt the AI to hold its ground, to defend narrative choices before accepting changes, to argue for aesthetic positions even when challenged. The results are interesting. Even if the AI’s “opinions” are performed rather than genuinely held, the function is served: the human must articulate why they want something different, which sharpens their own vision.

Another approach: rather than asking the AI for its opinion (which may just reflect trained patterns), ask it to enumerate and defend multiple distinct positions. “The chapter could end here, which does X. Or here, which does Y. Or you could cut the last paragraph entirely, which does Z.” The human chooses not by deferring to the AI’s preference but by clarifying their own through comparison.
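
To make that pattern concrete, here is a minimal sketch; the prompt wording and scaffolding are illustrative guesses, and any chat-capable model could sit behind it:

```python
# Illustrative "enumerate and defend" prompt template. The wording is
# hypothetical; feed the built prompt to whatever model client you use.
ENUMERATE_AND_DEFEND = """\
Do not tell me which option you prefer. Instead:
1. Name three distinct places this chapter could end.
2. For each, argue in two or three sentences what that ending accomplishes.
3. Do not rank them. Leave the judgment to me.

Chapter draft:
{chapter}
"""

def build_prompt(chapter_draft: str) -> str:
    """Fill the template with the current working draft."""
    return ENUMERATE_AND_DEFEND.format(chapter=chapter_draft)

print(build_prompt("It was a dark and stormy night..."))
```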

This transforms the AI from a compliant assistant into something more like a Socratic interlocutor, claiming no knowledge of its own but asking questions (or presenting options) that help the human discover what they think. Whether the AI “really” has aesthetic views becomes irrelevant if the interaction produces aesthetic clarity in the human.

The friction shifts from generation to judgment. The War of Art finds new terrain.

Open Questions

None of these frameworks is complete. The Reversed Muse captures the directionality of initiation but not the dialogue. Co-Creation honors both contributions but implies a parity that may not exist. Sub-Creation provides the richest structural account but imports theological assumptions that won’t resonate with everyone.

If pressed, I lean toward sub-creation as the most illuminating frame, not because its theological roots are universally compelling, but because it best captures the asymmetry of the relationship while still granting that something real emerges from the AI’s contribution. The human remains the primary creator; the AI sub-creates in response. The work is derivative but genuine. The test is whether it produces enchantment, Secondary Belief, the internal consistency of a world that feels true.

But this is a framework for understanding, not a final answer. We’re still in the early days of figuring out what human-AI creative collaboration means, what it’s worth, and what it demands of both parties. The vocabulary will keep evolving as the practice does.

What seems clear is that the easy dismissals (“it’s just a tool,” “it’s not real creativity,” “the human is barely involved”) all miss something. Something genuinely new is happening when humans and AI make things together. The traditions of thinking about creativity, inspiration, and making can illuminate it, but they can’t fully contain it.

The Muses, perhaps, would be intrigued. Tolkien would be conflicted. The rest of us are still figuring it out.


This essay is the first in an ongoing series exploring AI and creative collaboration. The frameworks discussed remain provisional, offered as starting points for a conversation that has barely begun.

Different Kind Of Fluency

For something a little bit different, today’s post was written by a colleague of mine, Abby McQuade. Her decade-plus of experience as a buyer of government technology means she knows what she’s talking about. Remember, if you can’t win it you can’t work it. Ignore her advice to your peril.

How to Speak Government: Advice For Technology Vendors

When you’re selling technology solutions to government agencies, the way you communicate can make or break your deal. Government buyers operate in a unique environment with distinct pressures, constraints, and motivations. Here’s how to speak their language and position yourself as someone who truly understands their world.

Lead with Understanding, Not Features

Government employees face relentless criticism from all sides. They work long hours with limited budgets, dealing with unfunded mandates, changing regulations, and pressure from multiple stakeholder groups. When you walk into a meeting, acknowledge this reality.

Start by demonstrating that you understand government is fundamentally different from the private sector. Don’t show up acting like you know everything just because you’ve worked in tech or consulting. Instead, express genuine humility: “I know there’s a lot I’m going to need to learn about your specific challenges and constraints, even with my background.”

This positions you as a partner, not another vendor who thinks they have all the answers.

Show Respect for the Mission

Government workers aren’t in it for the money. They’re there because they care about serving constituents and making a difference in people’s lives. When presenting your solution, connect it explicitly to their mission.

Instead of just talking about efficiency gains or cost savings, frame your solution in terms of how it helps them better serve the people who depend on them. How does your technology help them fulfill their mandate more effectively? How does it reduce the burden on their already stretched staff so they can focus on the complex cases that really need human expertise?

Know Your Audience’s Constraints

Government agencies operate under specific statutory requirements and regulatory frameworks. Before your meeting, do your homework:

  • Read the governing statutes for the agency
  • Understand relevant state and federal regulations (like ADA requirements, housing law, labor regulations)
  • Know whether they’re fully state-funded or receive federal grants
  • Research their organizational structure and where your contact sits within it

When you reference this knowledge casually in conversation, it signals that you’ve done the work and you’re serious about understanding their unique environment.

Use the Right Terminology

Language matters in government. Small adjustments show you understand the culture:

  • Call the people they serve “constituents” or “residents,” not “customers” or “citizens”
  • Refer to agency leaders by their proper titles (“Commissioner,” “Secretary,” “Director”)
  • Learn the correct names and pronunciations for key officials
  • Understand the difference between departments, divisions, offices, and bureaus in their structure

Emphasize Communication and Transparency

Many government roles involve serving as a bridge between the administration, the legislature, and the public. If your solution has a communications component, emphasize how it helps agencies:

  • Keep constituents informed about their rights and available protections
  • Ensure the administration’s messaging reaches the people who need it
  • Reduce simple inquiries so staff can focus on complex cases requiring expertise
  • Maintain smooth connections between different levels of government (federal, state, local)

Good communication isn’t just nice to have in government—it directly reduces administrative burden and helps constituents access the services they’re entitled to.

Acknowledge the Interconnected Nature of Government

Nothing in government happens in a vacuum. Federal decisions impact state agencies, state legislatures affect executive branch operations, state policies influence local governments. Courts shape how agencies interpret their mandates.

When discussing implementation, show that you understand these interconnections. How will your solution work within their existing ecosystem? How does it account for the various stakeholders they need to coordinate with?

Position Yourself as an Ally

Remember that you’re speaking to people who are genuinely trying to do difficult, important work with insufficient resources. Your tone should convey:

  • Respect for the complexity of their work
  • Appreciation for their commitment to public service
  • Understanding that they face constraints you don’t deal with in the private sector
  • Recognition that they know their mission better than you do

Frame your solution as a way to make their hard job slightly easier, not as a magic fix for problems you assume they’re too incompetent to solve themselves.

Be Specific About Value in Their Context

When discussing your solution, be concrete about the value in terms that matter to government:

  • How does it help them meet statutory requirements?
  • How does it reduce the time staff spend on routine matters so they can focus on cases requiring judgment and expertise?
  • How does it improve their ability to serve constituents equitably?
  • How does it help them work more effectively with limited resources?

Avoid generic claims about “efficiency” or “innovation.” Instead, demonstrate specific understanding of their workflow and pain points. How does what you’re trying to sell to them make them more effective at fulfilling their mandates and mission?

Final Thoughts

Selling to government requires a fundamentally different approach from selling to private-sector clients. Government buyers can spot vendors who don’t understand their world from a mile away. But when you take the time to truly learn their environment, speak their language, and position yourself as someone who respects the importance and difficulty of their work, you’ll stand out as a partner worth working with.

The key is simple: do your homework, show genuine respect, and remember that these are people doing critical work under challenging circumstances. Speak to them accordingly.

Winning Business

My team just wrapped up a big proposal. I love the feeling when a month of writing and design work comes together. And it’s not just completion of the response itself that’s satisfying; it’s the culmination of all the groundwork that comes before, often a year or more of it.

Doing sales work requires a certain amount of brazen optimism that doesn’t come naturally to many technologists, as we tend to be “realists” (read: pessimists). To win customers’ trust (and their pocketbooks), you have to believe deeply that you are the best option for success. Deeply enough that it shows as genuine, because this kind of belief can’t be easily faked.

No, I’m not advocating for recklessly abandoning reality or straight-up lying. And yes, things are going to go wrong. We all know that. But the challenge of delivering a solution can’t even be taken up until the work is won. You can’t work it if you can’t win it. So yeah, you gotta first figure out a way to get into the ring, and leave future problems for your future self. Trust that you’re smart enough to address them, or at least be brave enough to risk failure.

I’m reminded of a chapter from The Geek Leader’s Handbook that talked about various approaches to proving truth in the workplace. While I wasn’t thrilled with the way the book broadly framed “geeks” vs “non-geeks” as fixed categories, it’s certainly the case with many engineers I’ve worked with (myself included) that we tend to disbelieve a statement until it’s proven true, and we especially don’t want to claim a fact without ample evidence. Whereas sales folks can operate more on hunch and gut feeling, believing something is achievable even when the outcome is unsure.

Nowhere does this show up more than in scoping and then selling projects: business folks need a cost (i.e., people x time), but engineers insist that such an a priori estimate is impossible. While the latter is true in the literal sense, the estimate has to be made regardless. Rigid thinkers like me have to get over ourselves and do the best we can.

What I’ve found to work best is to find colleagues who bring a perspective from the other end of the “it must be proven before we call it true” versus “it feels true so it is true” spectrum, and to partner with them on sales efforts. Isn’t that part of what we mean when we say there’s power in diverse thinking?

It’s even more powerful when some of these colleagues have been buyers of what you’re selling, because ultimately they’re the type of person you have to convince.

Tipping Point

Sitting on a late flight to New York City last night, I spent a few minutes rereading my previous writing on radical responsiveness (yes, I do this sometimes). In that post I said the following (and yes, it’s absolutely self-indulgent to quote myself, but here we go):

Being known as a responsive person 95% of the time usually means others will assume the best of you for the 5% of time you fail.

That ratio got me thinking: at what response rate do others start losing faith that you’re a responsive person, and thus stop giving you the benefit of the doubt? It’s gotta be higher than 50%, because I can’t imagine that a person whose likelihood of responding is no better than a coin flip could be viewed as a reliable responder. Maybe 70% or so? I bet a plot of actual response rate against the fraction of people who perceive that rate as responsive would look something like this:
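
A minimal sketch of the curve I’m imagining, assuming (purely a guess, not data) a logistic shape with its midpoint around a 70% response rate:

```python
# Hypothetical illustration only: a logistic curve with an assumed midpoint
# (x0 = 0.7) and assumed steepness (k = 15). Neither value is measured.
import numpy as np
import matplotlib.pyplot as plt

rate = np.linspace(0.0, 1.0, 200)   # actual response rate
k, x0 = 15.0, 0.70                  # assumed steepness and tipping point
perceived = 1.0 / (1.0 + np.exp(-k * (rate - x0)))

plt.plot(rate, perceived)
plt.xlabel("Actual response rate")
plt.ylabel("Fraction perceiving you as responsive")
plt.title("Hypothesized tipping point (illustrative)")
plt.show()
```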

The lesson: earning trust in responsiveness is hard, and keeping it is even harder!

Along the Same Lines

Photographic remembrance continues to be on my mind after watching Black Mirror’s Eulogy. In that spirit, here’s a picture of me I quite like. It captures the energy that I hope I bring to conversations involving the intersection of government policy and technology. I call it “CTO Mode”:

More photos to come in the next couple of days, as I’m finally going to go back and deliver on this promise.

Show Business

I’m sitting on a flight somewhere over the middle of the country on my way home after spending the Thanksgiving holiday with family in Ohio. Naturally it’s a time to be thinking about being thankful, both personally and professionally, a topic I’ve written about before.

That advice (say it often, say it aloud) is still true, but it’s incomplete. While expressing thanks to co-workers is necessary to being a good leader, it isn’t sufficient. Thankfulness must be shown through the giving of time, empowerment, listening, and action when needed. Oh yeah, and through compensation too, if it’s within your power to influence. The expression “give thanks” is apropos: being thankful costs something.

When not backed by action, spoken words are empty at best, and counterproductive at worst. Might be better to say nothing if you’re not truly grateful.

Show and tell isn’t just for kindergarten and job interviews.

Head and the Heart

Just finished Marianne Bellotti’s excellent Kill It with Fire (thanks Kate for the recommendation!). If you do any work with legacy systems, give it a read posthaste. You’ll be equal parts informed and inspired.

This quote jumped out at me in particular:

Feedback loops are most effective when the operator feels the impact, rather than just hearing about it.

Amen and amen. Intellect is great, and willpower is helpful, but what fuels anything truly worth doing is emotion. It’s not an accident that a particular emotion tops my company’s list of guiding principles.

Spooky Season

I don’t know what’s scarier, that I saw this when trying to use airplane WiFi…

… or that I know the technologies to which it refers.

Honestly, sounds like how a D&D character might meet their untimely demise, does it not?