
Unfunded Mandates

This essay was developed through conversation with Anthropic’s Claude, drawing on dialogue with colleagues whose observations are quoted with permission. The ideas, arguments, structure, and final language are mine. In the taxonomy this essay proposes: Creator mode, reviewed and owned.

Someone sent me a design document recently. It was long enough to require several hours of careful review, yet unmistakably AI-generated and just as clearly unreviewed by the person who sent it. My best guess was that the prompt was written in about five minutes, ten tops.

I was being asked to invest hours evaluating a document its author had invested minutes producing. They weren’t being malicious. They were doing what the tools make easy: generate at volume, ship immediately, and let someone downstream sort it out. But here’s the thing about frictionless production: it doesn’t eliminate the work. It relocates it. When this is done without acknowledgment, what arrives isn’t collaboration. It’s an unfunded mandate for someone else’s cognitive labor.

The Confidence Gap

A friend of mine put the problem this way: AI “allows people who have never even thought of a subject matter to opine as if they were well-versed, while maintaining the same level of comprehension they started with.”

Comprehension stays flat, but confidence soars. And when that confident-sounding output is sent to another person with no indication of how it was produced, the recipient has no way to know how much trust or attention to invest.

We already navigate this intuitively in the analog world. You read a handwritten letter differently from a form letter. You respond to a heartfelt apology differently from a corporate PR statement. In each case you’re calibrating based on how much of themselves the sender invested in the message.

AI has blown up that calibration. A ChatGPT-generated email looks identical to a carefully composed one. An AI-drafted design doc reads the same as one built through weeks of analysis. When the surface is all you can see, you either give too much attention to content that doesn’t deserve it or you start distrusting everything. Both are bad.

Not All Words Are Equal

Deb Roy, writing in The Atlantic, recently argued that AI has decoupled speech from consequence for the first time in history. When language comes from a system that bears no vulnerability for what it says, the moral structure of language erodes. Promises hollow out, apologies become theater, and nobody stands behind the words because there’s nobody there to stand.

Roy’s got a point. But I think he’s looking for the break in the wrong joint. He sees consequence as something that lives in the speaker. I think it lives in the relationship between sender and recipient, and that relationship hinges not on what produced the words but on what the sender did with them before passing them along.

Roy’s framework also treats all speech as if it carries the same moral weight. It doesn’t. Not all communication requires the same degree of human presence. When you send a personal email to a friend going through a difficult time, the human presence in the words is the entire point. You’re not conveying information. You’re communicating that you cared enough to sit down and find the right words. AI could write something more polished. It would mean less.

Compare that to writing a readme in an open source code repository. Here, the accuracy and clarity of the content is the entire point. Nobody reading the readme cares whether you personally typed every word. They care whether the installation instructions work. I’ve personally had AI write several of these recently. As long as I validate the content, nobody is harmed.

Most communication falls between those poles: a design doc for your team, Slack messages to a direct report, a proposal for a government contract. The right level of human involvement varies for each, and it’s not always obvious where the line falls.

Toward Responsible Disclosure

I propose a simple rule of thumb: never ask the consumer to invest more than you did as the producer.

If you spent five minutes generating a document, don’t send it with an ask that requires hours of review. If you haven’t read your own output, don’t ask someone else to read it for you. The effort you put into producing and refining what you send sets the ceiling for what you can reasonably ask of your audience. Violate this, and you’re not collaborating. You’re offloading.

How do you put this into practice? Disclosure.

Think of it as the communication equivalent of open source licensing. A license doesn’t tell you whether a product is good. It tells you what expectations and obligations attach to it. Disclosure does the same: it tells the recipient what kind of social contract they’re entering.

Three questions form the backbone:

What was your role?

In earlier work on AI-assisted creativity, I mapped human-AI relationships to distinct approaches: Author, Muse, Artisan, Debater, Creator, Curator. The same taxonomy applies to communication; when you send something to another person, it’s worth knowing (and disclosing) which mode you were operating in:

  • Author: entirely my words. No AI. Read this as direct human communication.
  • Creator: I developed this with AI assistance. The thinking is mine; AI helped me organize and express it. Read it as authored-with-assistance.
  • Artisan: AI generated a draft. I reshaped and validated it substantially. Read it as human-refined.
  • Curator: AI generated this. I selected and organized but didn’t deeply rework it. Read it as a starting point, not a finished product.

Each is legitimate in the right context. What’s not legitimate is sending Curator-level work with Author-level expectations attached.

What have you validated?

“AI generated, I haven’t read it” and “AI generated, I’ve verified the technical claims but the prose is rough” and “AI assisted, but the architecture and recommendations are mine”—these are three very different things. Your reader needs to know which one they’re holding. Say so.

What do you expect from the recipient?

Match your ask to your effort. If you haven’t put in hours, don’t ask for hours. If what you need is a five-minute directional check, say that explicitly; don’t give a vague “please review.”

The person who sent me that document could have written: “I used AI to generate a first-pass design doc based on the requirements we discussed. I haven’t reviewed it in depth yet. Could you skim it and tell me if the general approach is sound before I invest time refining it?” That’s honest and proportional.

Connection Boundary

There’s one area where disclosure isn’t enough, where AI involvement changes what the words communicate no matter how transparent you are about it: personal correspondence. Emails to friends, texts to family, messages of condolence or congratulations or love.

These are acts where the human effort of finding words is itself the gift. When you write to someone who’s grieving, the struggle to say the right thing, the imperfection of what you manage, the fact that you sat with the blinking cursor and tried, that’s what communicates care. A perfectly worded AI-generated sympathy message is, in every sense that matters, less than a clumsy human one. Not because the words are worse. Because the act of writing is absent. Using AI here isn’t labor-saving. It’s a category error, like sending a robot to your friend’s funeral because it would deliver a better eulogy.

Call this dividing line the connection boundary. Below it, on the information-transfer side, AI involvement is a question of degree and disclosure. Above it, on the human-connection side, AI involvement eliminates the thing that makes the communication matter.

This doesn’t mean AI can’t play any role in personal communication. Using it to think through what you want to say, to consider whether your message might be misread, that keeps you in the loop. The line is between using AI to prepare yourself to communicate and using AI to communicate for you. The first is rehearsal. The second is outsourcing connection.

Counterfeit Collaboration

A question worth asking before you send unreviewed AI output to anyone: do you really need the other person’s input before your own review and refinement? Or are you simply avoiding the work?

Sending raw output without oversight likely violates the investment principle on its face. You’re asking someone else to do the thinking you skipped. If you genuinely need a gut check on direction before doing the hard work of refinement, say so explicitly. But this should be a rare exception, not a default workflow. If it’s becoming habit, the tool isn’t saving you time, it’s helping you avoid learning how to evaluate your own work.

And there’s something worse about this pattern than mere laziness. When you send unreviewed AI output and ask someone to “have a look and we can discuss,” you’re wearing the costume of collaboration while gutting its substance. A real discussion about a design or a proposal requires both parties to have done enough thinking to bring something to the table.

The language says partnership. The reality says: I need you to do this for me.

The Risk of Atrophy

Writing is not just a way to record thoughts. It’s a way to have them. The process of drafting, that’s where clarity gets forged: trying to fit a complex idea into a sentence and failing, then trying again. If I’ve learned anything from writing this blog, it’s that the thinking happens in the writing, not before it.

The person who routinely sends unreviewed AI output isn’t just issuing unfunded mandates for other people’s attention. They’re outsourcing their own professional development: losing the capacity to think critically about their domain because they’ve stopped doing the work that critical thinking requires.

Contemplation Is Not Dialogue

Everything above addresses what you owe the person who receives your words. But there’s another problem, a quieter problem, and it’s about what you think you’ve accomplished by the time you hit send.

A philosophically minded colleague noted that one’s assumptions can be “reflected back in the form of a dialogue when the AI response is actually closer to memory.” Real dialogue, the kind many conceptions of truth depend on, is how we test whether our views hold up against people who see the world differently. Knowing what’s true has a social and ethical component that AI can mimic and support but cannot be.

This matters practically. You have a long, productive-feeling conversation with a chatbot. It pushes back on your reasoning. Offers counterarguments. Helps you refine your thinking. By the end, you feel like your ideas have been stress-tested. But against what? The AI challenged your inferences, maybe capably. What it couldn’t do is challenge your priors with the weight of a different life behind the challenge. It has no competing commitments, no lived experience that diverges from yours, no stake in the outcome. The conversation felt like dialogue, but it was more like structured contemplation.

None of this makes AI conversation worthless. I obviously don’t think that, given that this essay grew out of one. Structured contemplation has real value, and it can help prepare for the real dialogue when it happens. The danger is mistaking the rehearsal for the performance.

Call this the internal version of the investment principle. The external version says: don’t ask your reader to put in more than you did. The internal version says: be honest about what kind of thinking you did. Working through ideas with AI is real work. Defending those ideas to a skeptical colleague who brings different assumptions to the table is different work. Both are useful, but the former is no substitute for the latter.

We’re early in figuring out what human-AI communication actually is. The analogies we reach for—tool, collaborator, ghostwriter, mirror—each grab a piece of it while dropping the rest. Better language will come. Until it does, the investment principle can serve us well: simple enough to apply right now, and honest enough to keep the responsibility where it belongs.

This essay is a companion to a series on AI and creativity: Light From Light, By Their Fruits, and Spellcraft. Those essays explored frameworks for human-AI creative collaboration. This one extends that thinking into personal and business communication.

Over My Skis

A few minutes ago I published my first Go module. But here’s the thing: I don’t know Go. What madness is this?

Granted, I’ve been by myself most of this weekend, but in that time I’ve published 3 new public projects:

Plus, I have a fourth project in the works that’ll affect this blog materially. And I’ve built a sophisticated “AI Chief of Staff” for my own use (not published yet, but I will eventually in some form), and I’ve made a handful of smaller one-off utilities. And I’ve started spec-ing out a major project. And I’ve matured my local Claude Code configuration and spruced up my dotfiles. And, and, and.

It’s absolutely bonkers the throughput coding agents enable. Knee of the curve indeed.

Can’t Stop Won’t Stop

Yup, it’s another post about Claude (and I don’t think it’ll be my last).

Yesterday I needed to share an extensive discussion I’d had in the Claude desktop app with a person who isn’t (yet) an AI user. I didn’t want to share the conversation with a public link (even an obscure one). So I built a tool that takes a Claude export archive and turns it into a navigable website. From there I simply printed the relevant conversation to PDF and sent that via email. Easy peasy.

And by “I built” I obviously mean I asked Claude Code to do it. Took less than an hour of wall clock time, only about 10 minutes of it requiring my active attention. The result is now on GitHub for your enjoyment. Amount of code I hand-wrote: zero. That includes the documentation.
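For the curious, the heart of such a tool is small. This is a hypothetical sketch, not the published code: it assumes the export’s conversations.json is a list of conversations with name and chat_messages fields (each message having sender and text), which may not match the current export format exactly.

```python
# Hypothetical sketch of the conversion step, NOT the published tool.
# Assumes conversations.json is a list of objects shaped like
# {"name": ..., "chat_messages": [{"sender": ..., "text": ...}, ...]};
# the real export format may differ.
import html
import json
from pathlib import Path


def export_to_html(export_path, out_dir):
    """Write one simple HTML page per conversation; return the paths written."""
    conversations = json.loads(Path(export_path).read_text())
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for i, convo in enumerate(conversations, start=1):
        title = convo.get("name") or f"Conversation {i}"
        parts = [f"<h1>{html.escape(title)}</h1>"]
        for msg in convo.get("chat_messages", []):
            sender = html.escape(msg.get("sender", "unknown"))
            text = html.escape(msg.get("text", ""))
            parts.append(f"<p><strong>{sender}:</strong> {text}</p>")
        page = out / f"conversation_{i}.html"
        page.write_text("\n".join(parts))
        written.append(page)
    return written
```

From there, printing a page to PDF is the browser’s job.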

There were other solutions out there, but I thought it’d take no more time just to build a tool to my exact specifications. And I was right. That was a revelation. The times they are a changin’, fellow software nerds. What I first mentioned as a far off possibility back in 2017 seems now a reality.

E Pluribus Plura: An Addendum

In Light From Light, I proposed that AI bears the imago hominum—the image of humanity—just as humans, in Tolkien’s framework, bear the imago Dei, the image of God. A reader with Latin might wonder why hominum rather than humanitatis. The latter is more euphonious. It rolls off the tongue more gracefully. So why the clunkier choice?

The distinction matters.

Imago humanitatis would mean “image of humanity”—humanity as abstraction, as essence, as Platonic form. It would suggest that AI bears the image of some unified concept: Humanity with a capital H, the distilled essence of what it means to be human.

But that’s not what an AI is. A large language model isn’t distilling the essence of humanity. It’s synthesizing patterns from millions of particular humans who wrote particular things. The training data isn’t a philosophical treatise on human nature; it’s an archive of human voices, messy and various and contradictory and specific. Reddit posts and academic papers. Poetry and product reviews. The profound and the banal, the beautiful and the ugly, all weighted by whatever patterns proved predictive.

Imago hominum keeps that plurality visible. It means “image of humans”—plural, specific, multitudinous. The model bears the image not of an abstraction but of a chorus. What’s reflected isn’t Human Nature but human voices, millions of them, averaged and weighted and transformed into something that can generate more.

This phrasing also captures something that humanitatis would obscure: those humans were real. They had names. They wrote specific things for specific reasons, and mostly didn’t consent to their words becoming training data. When we say the AI bears the image of humanity-as-abstraction, we lose sight of this. When we say it bears the image of humans, the ethical question remains visible. The image came from somewhere. It was made by someone. By many someones, in fact. The concerns about attribution and consent that swirl around AI-generated content are, in a sense, already encoded in the more honest Latin phrase. You can’t bear the image of humans without implicating those humans.

There’s an interesting asymmetry this creates with the original theological framework. Imago Dei refers to a singular God. Christian theology generally holds that God is unified; even the Trinity is “three persons, one substance.” Humans bear the image of this singular source.

But imago hominum refers to plural humans. AI doesn’t bear the image of one human creator the way humans bear the image of one divine Creator. It bears the image of the collective, the archive, the aggregated weight of human expression. The asymmetry is theologically suggestive: God is one; humanity is many. The image passed down carries that difference with it.

This also has implications for how we think about what AI “knows” or “believes.” If the model bore the imago humanitatis, we might expect it to reflect some coherent human essence: shared values, universal truths, the best of human thought refined and concentrated. But bearing the imago hominum, it reflects humans as they actually are: contradictory, contextual, shaped by when and where and for whom they were writing. The model doesn’t have a unified worldview because humans don’t have a unified worldview. It has patterns derived from a vast plurality.

None of this changes the practical framework. The approaches work the same whether you call it hominum or humanitatis. But precision in naming reveals precision in thinking. And in this case, the less elegant phrase is the more honest one.

Imago hominum. Image of humans. The light refracted through a million prisms, not distilled through one.

Gift, What Gift?

It’s Christmas today, yay! In that spirit, I have two applications to share with the world. The first one I’ll talk about today, the other later this week.

My family loves to play games of all varieties, especially on holidays. An old favorite is Pinochle, which I first learned from my grandparents in Michigan (pretty sure card playing is the only thing to do in the Midwest in winter). Almost 3 years ago I first spoke about creating an online score tracking tool for Pinochle, and I released it in an initial form last year. Today it’s finally usable. Check it out at onlinescoresheet.net, scoresheet.info, scoresheet.mobi, or scoresheet.space (I do like lots of domain names). You can also find the source code on GitHub (completely AI-written).

This is a very bad Pinochle hand

What got it over the hump from “fiddly prototype” to “ready for prime time” wasn’t the choice of development tool or a eureka moment on my part. It was actual usage by real users other than myself. Putting it out there, and then convincing my family members (across a couple generations and device types) to try it. I got enough feedback to make a handful of critical improvements, and while it could certainly be better, it’s perfectly usable and doesn’t have any glaring functional bugs.

Usage is a gift. Seek it out, and don’t take it for granted.

Different Kind Of Fluency

For something a little bit different, today’s post was written by a colleague of mine, Abby McQuade. Her decade-plus of experience as a buyer of government technology means she knows what she’s talking about. Remember, if you can’t win it you can’t work it. Ignore her advice to your peril.

How to Speak Government: Advice For Technology Vendors

When you’re selling technology solutions to government agencies, the way you communicate can make or break your deal. Government buyers operate in a unique environment with distinct pressures, constraints, and motivations. Here’s how to speak their language and position yourself as someone who truly understands their world.

Lead with Understanding, Not Features

Government employees face relentless criticism from all sides. They work long hours with limited budgets, dealing with unfunded mandates, changing regulations, and pressure from multiple stakeholder groups. When you walk into a meeting, acknowledge this reality.

Start by demonstrating that you understand government is fundamentally different from the private sector. Don’t show up acting like you know everything just because you’ve worked in tech or consulting. Instead, express genuine humility: “I know there’s a lot I’m going to need to learn about your specific challenges and constraints, even with my background.”

This positions you as a partner, not another vendor who thinks they have all the answers.

Show Respect for the Mission

Government workers aren’t in it for the money. They’re there because they care about serving constituents and making a difference in people’s lives. When presenting your solution, connect it explicitly to their mission.

Instead of just talking about efficiency gains or cost savings, frame your solution in terms of how it helps them better serve the people who depend on them. How does your technology help them fulfill their mandate more effectively? How does it reduce the burden on their already stretched staff so they can focus on the complex cases that really need human expertise?

Know Your Audience’s Constraints

Government agencies operate under specific statutory requirements and regulatory frameworks. Before your meeting, do your homework:

  • Read the governing statutes for the agency
  • Understand relevant state and federal regulations (like ADA requirements, housing law, labor regulations)
  • Know whether they’re fully state-funded or receive federal grants
  • Research their organizational structure and where your contact sits within it

When you reference this knowledge casually in conversation, it signals that you’ve done the work and you’re serious about understanding their unique environment.

Use the Right Terminology

Language matters in government. Small adjustments show you understand the culture:

  • Call the people they serve “constituents” or “residents,” not “customers” or “citizens”
  • Refer to agency leaders by their proper titles (“Commissioner,” “Secretary,” “Director”)
  • Learn the correct names and pronunciations for key officials
  • Understand the difference between departments, divisions, offices, and bureaus in their structure

Emphasize Communication and Transparency

Many government roles involve serving as a bridge between the administration, the legislature, and the public. If your solution has a communications component, emphasize how it helps agencies:

  • Keep constituents informed about their rights and available protections
  • Ensure the administration’s messaging reaches the people who need it
  • Reduce simple inquiries so staff can focus on complex cases requiring expertise
  • Maintain smooth connections between different levels of government (federal, state, local)

Good communication isn’t just nice to have in government—it directly reduces administrative burden and helps constituents access the services they’re entitled to.

Acknowledge the Interconnected Nature of Government

Nothing in government happens in a vacuum. Federal decisions impact state agencies, state legislatures affect executive branch operations, state policies influence local governments. Courts shape how agencies interpret their mandates.

When discussing implementation, show that you understand these interconnections. How will your solution work within their existing ecosystem? How does it account for the various stakeholders they need to coordinate with?

Position Yourself as an Ally

Remember that you’re speaking to people who are genuinely trying to do difficult, important work with insufficient resources. Your tone should convey:

  • Respect for the complexity of their work
  • Appreciation for their commitment to public service
  • Understanding that they face constraints you don’t deal with in the private sector
  • Recognition that they know their mission better than you do

Frame your solution as a way to make their hard job slightly easier, not as a magic fix for problems you assume they’re too incompetent to solve themselves.

Be Specific About Value in Their Context

When discussing your solution, be concrete about the value in terms that matter to government:

  • How does it help them meet statutory requirements?
  • How does it reduce the time staff spend on routine matters so they can focus on cases requiring judgment and expertise?
  • How does it improve their ability to serve constituents equitably?
  • How does it help them work more effectively with limited resources?

Avoid generic claims about “efficiency” or “innovation.” Instead, demonstrate specific understanding of their workflow and pain points. How does what you’re trying to sell to them make them more effective at fulfilling their mandates and mission?

Final Thoughts

Selling to government requires a fundamentally different approach than selling to private sector clients. Government buyers can spot vendors who don’t understand their world from a mile away. But when you take the time to truly learn their environment, speak their language, and position yourself as someone who respects the importance and difficulty of their work, you’ll stand out as a partner worth working with.

The key is simple: do your homework, show genuine respect, and remember that these are people doing critical work under challenging circumstances. Speak to them accordingly.

When Everyone Is Super

By name, at least, I’ve now worked at six different vendors of government solutions. There’s a fundamental tension that arises when building for state governments especially, one I’ve seen over and over again:

  • On one hand, vendors want to build products that can be deployed repeatedly across states for cost-effectiveness at scale and rapid per-project implementation
  • On the other hand, states have wildly divergent policy landscapes and political realities, even in seemingly similar domains, demanding highly customized solutions

This tension creates numerous challenges. First, how should the system be architected to support configurability in the first place? It adds cost and risk to do so. And then, how should vendors communicate configurable features to a paying customer who doesn’t need the options? If you’re collaborating closely during development (as you should), it’s going to come up in planning and status meetings.

A case I’ve made that usually resonates is that having configurable options enables us as a vendor to maintain a (mostly) common codebase across customers. And that means when an improvement is made for any customer, everyone benefits. More succinctly: forks are bad. I can tell at least one tale of a high-profile private customer that initially insisted on having its own radically customized copy of our company’s core product line, only to regret it a few years later when it took months to backport newer features to it.

Here are a few considerations for product and engineering folks when developing a solution for scale through repeated implementations:

  • First Project: keep scalability in the back of your mind, but don’t overbuild (YAGNI applies) or you’ll price yourself out of your first customer; do basic foundational configurability and focus primarily on your immediate requirements
  • Second Project: don’t make the mistake of thinking you can discount your pricing; you’ve yet to hit economies of scale, and you’ll need any budget saved from reuse to expand your configurability capabilities and begin thinking about a long-term scaling strategy
  • Third Project: this all-important moment is where you can truly begin productizing, with full configurability (going beyond mere look and feel to business logic) and rapid, repeatable deployments
  • Fourth Project: now you should be reaping the efficiency benefits of your configurability and repeatability; if you haven’t yet, make those investments fast, or it’ll be too late

Finally, an anti-pattern:

if customer == 'Customer 1':
    doAThing()
elif customer == 'Customer 2':
    doADifferentThing()
elif customer == 'Customer 3':
    doYetAnotherThing()

The above might be fine for your first couple projects, but if it’s still in your code by project 3 or 4, you’re doomed.
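One way out, sketched here with hypothetical customer names and workflow keys: drive the differing behavior from per-customer configuration, so onboarding a new customer means adding data rather than another branch.

```python
# Sketch of configuration-driven behavior replacing per-customer branches.
# Customer names and workflow keys below are hypothetical.

# Per-customer configuration: in practice this would live in a config file
# or database, not in code.
CUSTOMER_CONFIG = {
    "Customer 1": {"workflow": "standard"},
    "Customer 2": {"workflow": "expedited"},
    "Customer 3": {"workflow": "batch"},
}


def run_standard():
    return "standard workflow"


def run_expedited():
    return "expedited workflow"


def run_batch():
    return "batch workflow"


# Behavior lives in a registry keyed by configuration, not customer identity.
WORKFLOWS = {
    "standard": run_standard,
    "expedited": run_expedited,
    "batch": run_batch,
}


def run_for(customer):
    """Look up the customer's configured workflow and execute it."""
    workflow_key = CUSTOMER_CONFIG[customer]["workflow"]
    return WORKFLOWS[workflow_key]()
```

A fourth customer that wants the expedited flow becomes a one-line configuration change, and every customer picks up improvements to the shared workflows.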

Into The Deep End

Something I appreciated about working at AWS is that consultants were not thrown to the wolves without training, the most useful of which was a series of mock customer meetings that modeled likely real-world interactions with both business and IT stakeholders.

Proverbial soft skills are difficult to teach, but they’re critically important, and this kind of simulated experience does a decent job prepping new hires for the challenges they’ll face. And framing it around an actual project might just result in useful artifacts or marketable solutions. Win-win!

I’m hoping to eventually replicate this practice at my new gig, and thus wanted to capture a rough outline of how I’d structure such a training. Perhaps it makes a case for my prior post?

Participants

  • Project Team (Builders)
  • Customer Team (Customers) – Proxy for a real customer
  • Coach (cannot be on either Project or Customer teams)
  • Sponsor (can be same as coach, but usually will be manager / executive)

Planning Work

(drafted by Project Team, reviewed by Coach, and approved by Sponsor)

  • Define relevance to company objectives
  • Identify participants
  • Capture project title and description
  • Identify objectives
    • Skills to be learned
    • Artifacts to be built
  • Identify dependencies / deadlines
  • Scope guardrails for level of effort
  • Template
    • Participants: (identify every role)
    • Title: (one liner)
    • Description: (should clearly connect to company objectives)
    • Expected Outcomes: (final artifacts should be listed here, but ends, not means)
    • Skills To Be Learned: (be as specific as practical, acknowledging some flexibility may be required)

Customer Meeting Guidelines

  • All are scheduled and led by the Builders
  • All are attended by the Builders & Customers
  • Customer team should “play the part” (within reason)
  • Coach and Sponsor are optional, but their active participation should be minimized
  • Meetings are held synchronously and are recorded

Three Phases

1.  Discover

  • Builders review pre-work document
  • Builders develop agenda and questions for discovery session
  • Coach reviews agenda and questions (can be async) and gives go-ahead for meeting
  • Builders schedule and lead discovery meeting
  • Builders capture notes and document requirements
  • Builders follow up with Customers post-meeting to gather more info as needed
  • Builders meet with coach to discuss meeting, review artifacts, and plan next steps (can be async)

2.  Design

  • Builders design architecture/approach based on discovered requirements
  • Builders capture design and a build plan in a scope document
  • Coach reviews scope document (can be async) and gives go-ahead for meeting
  • Builders schedule and lead design review meeting
  • Customers give go-ahead to proceed (or iterate with Builders until ready)
  • Builders meet with coach to discuss design meeting, review artifacts, and plan next steps (can be async)

3.  Demonstrate

  • Builders execute on the build plan
  • Builders engage Customers and Coach as needed to address blockers (can be async)
  • Coach reviews completed artifacts (can be async) and gives go-ahead for meeting
  • Builders schedule and lead demonstration meeting
  • Customers give go-ahead that build is complete (or iterate with Builders until ready)
  • Builders address feedback and deliver final artifacts to Customers
  • Builders meet with coach to discuss demo meeting, review artifacts, and capture final thoughts / next steps