Tag: Customer Obsession

The Truth Sets You Free

Editor’s Note: I’m going to go through some old drafts and either publish them or delete them. This beauty is from a few years back when I was trying to reap at least some benefit from visiting my least favorite city, Las Vegas. Thank goodness I can now get Marriott Bonvoy points at any MGM hotel instead.

This is an impressively bad error message I ran into while trying to create an account on the MGM Resorts website. The input screen even had instructions on password format, so the framework was there to do the right thing, but they didn’t mention length, nor did they validate it in the UI.

Of course I had to dig into Developer Tools to see what was going on. A useful error message was right there. C’mon MGM, do better.

Also, a 20 character limit for passwords? That’s dumb.

Imagining Dragons

Editor’s Note: I wrote the first draft of this post back in December, before I’d truly discovered Claude Code. Not sure it’d play out this same way now, several months later. I really ought to get back to it and find out.

I used Amazon Kiro to build a thing that I hope to publish eventually. But in the meantime, I’ll share an anecdote from my experience with it.

The spec-driven development model makes a lot of sense to me. In a few minutes with Kiro, I thought I had a solid description of what I wanted to build. Kicked off the tasks, let things cook for a while, and after a bit, I was told things were ready to test.

Not quite sure where to begin, I asked for a full end-to-end walkthrough in the README. The model wrote a great one with detailed, step-by-step command line instructions. I was excited to try it out. Opened up my terminal, Ctrl-C Ctrl-V-ed the first command, and… error: option not supported.

Tried another one, same thing. Weird.

Did a bit more investigation and came to a shocking realization: Kiro had hallucinated the entire walkthrough.

At first I was upset, but in truth, it was okay! Because I just told Kiro to read the README in detail, and turn the walkthrough into reality by building all the stuff it had invented, and retroactively put it in the spec.

Legitimate approach? Perhaps. But next time, maybe I’ll have it build the experience first, and then the code? Work backwards from the customer, anyone?

All Around You

Ever need to run a bunch of parallel bash commands with the same executable but different arguments? And be able to watch their stdout streams without them being intermingled? And get a brief report on successes and failures at the end of it all?

Yeah, me too.

Had Claude Code whip me up this script just now. Works like a charm.
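The actual script isn’t shown here, but the idea is simple enough to sketch. A minimal version in Python (the names and structure below are my own illustration, not the script Claude wrote): run one executable with several argument sets in parallel, tag every stdout line with its job’s label so the streams stay readable, and print a brief pass/fail summary at the end.

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run(label: str, cmd: list[str]) -> tuple[str, int]:
    # Read stdout line by line and tag each line with the job label,
    # so concurrent jobs interleave at line granularity, not mid-line.
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    for line in proc.stdout:
        sys.stdout.write(f"[{label}] {line}")
    return label, proc.wait()

def run_all(executable: str, arg_sets: dict[str, list[str]]) -> dict[str, int]:
    # Same executable, different argument lists, one thread per job.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(run, label, [executable, *args])
                   for label, args in arg_sets.items()]
        results = dict(f.result() for f in futures)
    # Brief report at the end: which jobs succeeded, which failed.
    for label, code in sorted(results.items()):
        status = "ok" if code == 0 else f"failed (exit {code})"
        print(f"{label}: {status}")
    return results

if __name__ == "__main__":
    run_all("echo", {"a": ["hello"], "b": ["world"]})
```

Threads (rather than processes) are fine here because each worker just blocks on subprocess I/O; line-by-line prefixing keeps the streams distinguishable without any heavier coordination.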

Not entirely relevant, but did the above while three other coding sessions were summarizing my past six months of activity, refining a podcast script, and putting together a set of questions for an RFP response.

While on an airplane.

The world can be a messed up place at times, but it’s still full of magic and miracles.

Unfunded Mandates

This essay was developed through conversation with Anthropic’s Claude, drawing on dialogue with colleagues whose observations are quoted with permission. The ideas, arguments, structure, and final language are mine. In the taxonomy this essay proposes: Creator mode, reviewed and owned.

Someone sent me a design document recently. It was long enough to require several hours of careful review, yet unmistakably AI-generated and just as clearly unreviewed by the person who sent it. My best guess was that the prompt was written in about five minutes, ten tops.

I was being asked to invest hours evaluating a document its author had invested minutes producing. They weren’t being malicious. They were doing what the tools make easy: generate at volume, ship immediately, and let someone downstream sort it out. But here’s the thing about frictionless production: it doesn’t eliminate the work. It relocates it. When this is done without acknowledgment, what arrives isn’t collaboration. It’s an unfunded mandate for someone else’s cognitive labor.

The Confidence Gap

A friend of mine put the problem this way: AI “allows people who have never even thought of a subject matter to opine as if they were well-versed, while maintaining the same level of comprehension they started with.”

Comprehension stays flat, but confidence soars. And when that confident-sounding output is sent to another person with no indication of how it was produced, the recipient has no way to know how much trust or attention to invest.

We already navigate this intuitively in the analog world. You read a handwritten letter differently from a form letter. You respond to a heartfelt apology differently from a corporate PR statement. In each case you’re calibrating based on how much the sender invested of themselves in the message.

AI has blown up that calibration. A ChatGPT-generated email looks identical to a carefully composed one. An AI-drafted design doc reads the same as one built through weeks of analysis. When the surface is all you can see, you either give too much attention to content that doesn’t deserve it or you start distrusting everything. Both are bad.

Not All Words Are Equal

Deb Roy, writing in The Atlantic, recently argued that AI has decoupled speech from consequence for the first time in history. When language comes from a system that bears no vulnerability for what it says, the moral structure of language erodes. Promises hollow out, apologies become theater, and nobody stands behind the words because there’s nobody there to stand.

Roy’s got a point. But I think he’s looking for the break in the wrong joint. He sees consequence as something that lives in the speaker. I think it lives in the relationship between sender and recipient, and that relationship hinges not on what produced the words but on what the sender did with them before passing them along.

Roy’s framework also treats all speech as if it carries the same moral weight. It doesn’t. Not all communication requires the same degree of human presence. When sending a personal email to a friend going through a difficult time, the human presence in the words is the entire point. You’re not conveying information. You’re communicating that you cared enough to sit down and find the right words. AI could write something more polished. It would mean less.

Compare that to writing a README in an open source code repository. Here, the accuracy and clarity of the content is the entire point. Nobody reading the README cares whether you personally typed every word. They care whether the installation instructions work. I’ve personally had AI write several of these recently. As long as I validate the content, nobody is harmed.

Most communication falls between those poles: a design doc for your team, Slack messages to a direct report, a proposal for a government contract. The right level of human involvement varies for each, and it’s not always obvious where the line falls.

Toward Responsible Disclosure

I propose a simple rule of thumb: never ask the consumer to invest more than you did as the producer.

If you spent five minutes generating a document, don’t send it with an ask that requires hours of review. If you haven’t read your own output, don’t ask someone else to read it for you. The effort you put into producing and refining what you send sets the ceiling for what you can reasonably ask of your audience. Violate this, and you’re not collaborating. You’re offloading.

How do you put this into practice? Disclosure.

Think of it as the communication equivalent of open source licensing. A license doesn’t tell you whether a product is good. It tells you what expectations and obligations attach to it. Disclosure does the same: it tells the recipient what kind of social contract they’re entering.

Three questions form the backbone:

What was your role?

In earlier work on AI-assisted creativity, I mapped human-AI relationships to distinct approaches: Author, Muse, Artisan, Debater, Creator, Curator. The same taxonomy applies to communication; when you send something to another person, it’s worth knowing (and disclosing) which mode you were operating in:

  • Author: entirely my words. No AI. Read this as direct human communication.
  • Creator: I developed this with AI assistance. The thinking is mine; AI helped me organize and express it. Read it as authored-with-assistance.
  • Artisan: AI generated a draft. I reshaped and validated it substantially. Read it as human refined.
  • Curator: AI generated this. I selected and organized but didn’t deeply rework it. Read it as a starting point, not a finished product.

Each is legitimate in the right context. What’s not legitimate is sending Curator-level work with Author-level expectations attached.

What have you validated?

“AI generated, I haven’t read it” and “AI generated, I’ve verified the technical claims but the prose is rough” and “AI assisted, but the architecture and recommendations are mine”—these are three very different things. Your reader needs to know which one they’re holding. Say so.

What do you expect from the recipient?

Match your ask to your effort. If you haven’t put in hours, don’t ask for hours. If what you need is a five-minute directional check, say that explicitly; don’t give a vague “please review.”

The person who sent me that document could have written: “I used AI to generate a first-pass design doc based on the requirements we discussed. I haven’t reviewed it in depth yet. Could you skim it and tell me if the general approach is sound before I invest time refining it?” That’s honest and proportional.

Connection Boundary

There’s one area where disclosure isn’t enough, where AI involvement changes what the words communicate no matter how transparent you are about it: personal correspondence. Emails to friends, texts to family, messages of condolence or congratulations or love.

These are acts where the human effort of finding words is itself the gift. When you write to someone who’s grieving, the struggle to say the right thing, the imperfection of what you manage, the fact that you sat with the blinking cursor and tried, that’s what communicates care. A perfectly worded AI-generated sympathy message is, in every sense that matters, less than a clumsy human one. Not because the words are worse. Because the act of writing is absent. Using AI here isn’t labor-saving. It’s a category error, like sending a robot to your friend’s funeral because it would deliver a better eulogy.

Call this dividing line the connection boundary. Below it, on the information-transfer side, AI involvement is a question of degree and disclosure. Above it, on the human-connection side, AI involvement eliminates the thing that makes the communication matter.

This doesn’t mean AI can’t play any role in personal communication. Using it to think through what you want to say, to consider whether your message might be misread, that keeps you in the loop. The line is between using AI to prepare yourself to communicate and using AI to communicate for you. The first is rehearsal. The second is outsourcing connection.

Counterfeit Collaboration

A question worth asking before you send unreviewed AI output to anyone: do you really need the other person’s input before your own review and refinement? Or are you simply avoiding the work?

Sending raw output without oversight likely violates the investment principle on its face. You’re asking someone else to do the thinking you skipped. If you genuinely need a gut check on direction before doing the hard work of refinement, say so explicitly. But this should be a rare exception, not a default workflow. If it’s becoming habit, the tool isn’t saving you time, it’s helping you avoid learning how to evaluate your own work.

And there’s something worse about this pattern than mere laziness. When you send unreviewed AI output and ask someone to “have a look and we can discuss,” you’re wearing the costume of collaboration while gutting its substance. A real discussion about a design or a proposal requires both parties to have done enough thinking to bring something to the table.

The language says partnership. The reality says: I need you to do this for me.

The Risk of Atrophy

Writing is not just a way to record thoughts. It’s a way to have them. The process of drafting, that’s where clarity gets forged: trying to fit a complex idea into a sentence and failing, then trying again. If I’ve learned anything from writing this blog, it’s that the thinking happens in the writing, not before it.

The person who routinely sends unreviewed AI output isn’t just issuing unfunded mandates for other people’s attention. They’re outsourcing their own professional development: losing the capacity to think critically about their domain because they’ve stopped doing the work that critical thinking requires.

Contemplation Is Not Dialogue

Everything above addresses what you owe the person who receives your words. But there’s another problem, a quieter problem, and it’s about what you think you’ve accomplished by the time you hit send.

A philosophically-minded colleague noted that one’s assumptions can be “reflected back in the form of a dialogue when the AI response is actually closer to memory.” Real dialogue, the kind many conceptions of truth depend on, is how we test whether our views hold up against people who see the world differently. Knowing what’s true has a social and ethical component that AI can mimic and support but cannot be.

This matters practically. You have a long, productive-feeling conversation with a chatbot. It pushes back on your reasoning. Offers counterarguments. Helps you refine your thinking. By the end, you feel like your ideas have been stress-tested. But against what? The AI challenged your inferences, maybe capably. What it couldn’t do is challenge your priors with the weight of a different life behind the challenge. It has no competing commitments, no lived experience that diverges from yours, no stake in the outcome. The conversation felt like dialogue, but it was more like structured contemplation.

None of this makes AI conversation worthless. I obviously don’t think that, given that this essay grew out of one. Structured contemplation has real value, and it can help prepare for the real dialogue when it happens. The danger is mistaking the rehearsal for the performance.

Call this the internal version of the investment principle. The external version says: don’t ask your reader to put in more than you did. The internal version says: be honest about what kind of thinking you did. Working through ideas with AI is real work. Defending those ideas to a skeptical colleague who brings different assumptions to the table is different work. Both are useful, but the former is no substitute for the latter.

We’re early in figuring out what human-AI communication actually is. The analogies we reach for—tool, collaborator, ghostwriter, mirror—each grab a piece of it while dropping the rest. Better language will come. Until it does, the investment principle can serve us well: simple enough to apply right now, and honest enough to keep the responsibility where it belongs.

This essay is a companion to a series on AI and creativity: Light From Light, By Their Fruits, Spellcraft, and E Pluribus Plura. Those essays explored frameworks for human-AI creative collaboration. This one extends that thinking into personal and business communication.

Over My Skis

A few minutes ago I published my first Go module. But here’s the thing: I don’t know Go. What madness is this?

Granted, I’ve been by myself most of this weekend, but in that time I’ve published 3 new public projects:

Plus, I have a fourth project in the works that’ll affect this blog materially. And I’ve built a sophisticated “AI Chief of Staff” for my own use (not published yet, but I will eventually in some form), and I’ve made a handful of smaller one-off utilities. And I’ve started spec-ing out a major project. And I’ve matured my local Claude Code configuration and spruced up my dotfiles. And, and, and.

It’s absolutely bonkers the throughput coding agents enable. Knee of the curve indeed.

Can’t Stop Won’t Stop

Yup, it’s another post about Claude (and I don’t think it’ll be my last).

Yesterday I had a need to share an extensive discussion I had in the Claude desktop app with a person who isn’t (yet) an AI user. I didn’t want to share the conversation with a public link (even if an obscure one). So I built a tool that takes a Claude export archive and turns it into a navigable website. From there I simply printed the relevant conversation to PDF and sent that via email. Easy peasy.

And by “I built” I obviously mean I asked Claude Code to do it. Took less than an hour of wall clock time, only about 10 minutes of it requiring my active attention. The result is now on GitHub for your enjoyment. Amount of code I hand-wrote: zero. That includes the documentation.

There were other solutions out there, but I thought it’d take no more time just to build a tool to my exact specifications. And I was right. That was a revelation. The times they are a changin’, fellow software nerds. What I first mentioned as a far off possibility back in 2017 seems now a reality.

E Pluribus Plura: An Addendum

In Light From Light, I proposed that AI bears the imago hominum—the image of humanity—just as humans, in Tolkien’s framework, bear the imago Dei, the image of God. A reader with Latin might wonder why hominum rather than humanitatis. The latter is more euphonious. It rolls off the tongue more gracefully. So why the clunkier choice?

The distinction matters.

Imago humanitatis would mean “image of humanity”—humanity as abstraction, as essence, as Platonic form. It would suggest that AI bears the image of some unified concept: Humanity with a capital H, the distilled essence of what it means to be human.

But that’s not what an AI is. A large language model isn’t distilling the essence of humanity. It’s synthesizing patterns from millions of particular humans who wrote particular things. The training data isn’t a philosophical treatise on human nature; it’s an archive of human voices, messy and various and contradictory and specific. Reddit posts and academic papers. Poetry and product reviews. The profound and the banal, the beautiful and the ugly, all weighted by whatever patterns proved predictive.

Imago hominum keeps that plurality visible. It means “image of humans”—plural, specific, multitudinous. The model bears the image not of an abstraction but of a chorus. What’s reflected isn’t Human Nature but human voices, millions of them, averaged and weighted and transformed into something that can generate more.

This phrasing also captures something that humanitatis would obscure: those humans were real. They had names. They wrote specific things for specific reasons, and mostly didn’t consent to their words becoming training data. When we say the AI bears the image of humanity-as-abstraction, we lose sight of this. When we say it bears the image of humans, the ethical question remains visible. The image came from somewhere. It was made by someone. By many someones, in fact. The concerns about attribution and consent that swirl around AI-generated content are, in a sense, already encoded in the more honest Latin phrase. You can’t bear the image of humans without implicating those humans.

There’s an interesting asymmetry this creates with the original theological framework. Imago Dei refers to a singular God. Christian theology generally holds that God is unified; even the Trinity is “three persons, one substance.” Humans bear the image of this singular source.

But imago hominum refers to plural humans. AI doesn’t bear the image of one human creator the way humans bear the image of one divine Creator. It bears the image of the collective, the archive, the aggregated weight of human expression. The asymmetry is theologically suggestive: God is one; humanity is many. The image passed down carries that difference with it.

This also has implications for how we think about what AI “knows” or “believes.” If the model bore the imago humanitatis, we might expect it to reflect some coherent human essence: shared values, universal truths, the best of human thought refined and concentrated. But bearing the imago hominum, it reflects humans as they actually are: contradictory, contextual, shaped by when and where and for whom they were writing. The model doesn’t have a unified worldview because humans don’t have a unified worldview. It has patterns derived from a vast plurality.

None of this changes the practical framework. The approaches work the same whether you call it hominum or humanitatis. But precision in naming reveals precision in thinking. And in this case, the less elegant phrase is the more honest one.

Imago hominum. Image of humans. The light refracted through a million prisms, not distilled through one.


This essay is the final installment in an ongoing exploration of AI and creative collaboration; the prior ones are Light From Light, By Their Fruits, and Spellcraft.

Gift, What Gift?

It’s Christmas today, yay! In that spirit, I have two applications to share with the world. The first one I’ll talk about today, the other later this week.

My family loves to play games of all varieties, especially on holidays. An old favorite is Pinochle, which I first learned from my grandparents in Michigan (pretty sure card playing is the only thing to do in the Midwest in winter). Almost 3 years ago I first spoke about creating an online score tracking tool for Pinochle, and released it in an initial form last year. Today it’s finally usable. Check it out at onlinescoresheet.net, scoresheet.info, scoresheet.mobi, or scoresheet.space (I do like lots of domain names). You can also find the source code on GitHub (completely AI-written).

This is a very bad Pinochle hand

What got it over the hump from “fiddly prototype” to “ready for prime time” wasn’t the choice of development tool or a eureka moment on my part. It was actual usage by real users other than myself. Putting it out there, and then convincing my family members (across a couple of generations and device types) to try it. Got enough feedback to make a handful of critical improvements, and while it could certainly be better, it’s perfectly usable and doesn’t have any glaring functional bugs.

Usage is a gift. Seek it out, and don’t take it for granted.