Author: Jud

Technologist interested in building both systems and organizations that are secure, scalable, cost-effective, and most of all, good for humanity.
Unfunded Mandates

This essay was developed through conversation with Anthropic’s Claude, drawing on dialogue with colleagues whose observations are quoted with permission. The ideas, arguments, structure, and final language are mine. In the taxonomy this essay proposes: Creator mode, reviewed and owned.

Someone sent me a design document recently. It was long enough to require several hours of careful review, yet unmistakably AI-generated and just as clearly unreviewed by the person who sent it. My best guess was that the prompt was written in about five minutes, ten tops.

I was being asked to invest hours evaluating a document its author had invested minutes producing. They weren’t being malicious. They were doing what the tools make easy: generate at volume, ship immediately, and let someone downstream sort it out. But here’s the thing about frictionless production: it doesn’t eliminate the work. It relocates it. When this is done without acknowledgment, what arrives isn’t collaboration. It’s an unfunded mandate for someone else’s cognitive labor.

The Confidence Gap

A friend of mine put the problem this way: AI “allows people who have never even thought of a subject matter to opine as if they were well-versed, while maintaining the same level of comprehension they started with.”

Comprehension stays flat, but confidence soars. And when that confident-sounding output is sent to another person with no indication of how it was produced, the recipient has no way to know how much trust or attention to invest.

We already navigate this intuitively in the analog world. You read a handwritten letter differently from a form letter. You respond to a heartfelt apology differently from a corporate PR statement. In each case you’re calibrating based on how much the sender invested of themselves in the message.

AI has blown up that calibration. A ChatGPT-generated email looks identical to a carefully composed one. An AI-drafted design doc reads the same as one built through weeks of analysis. When the surface is all you can see, you either give too much attention to content that doesn’t deserve it or you start distrusting everything. Both are bad.

Not All Words Are Equal

Deb Roy, writing in The Atlantic, recently argued that AI has decoupled speech from consequence for the first time in history. When language comes from a system that bears no vulnerability for what it says, the moral structure of language erodes. Promises hollow out, apologies become theater, and nobody stands behind the words because there’s nobody there to stand.

Roy’s got a point. But I think he’s looking for the break in the wrong joint. He sees consequence as something that lives in the speaker. I think it lives in the relationship between sender and recipient, and that relationship hinges not on what produced the words but on what the sender did with them before passing them along.

Roy’s framework also treats all speech as if it carries the same moral weight. It doesn’t. Not all communication requires the same degree of human presence. When sending a personal email to a friend going through a difficult time, the human presence in the words is the entire point. You’re not conveying information. You’re communicating that you cared enough to sit down and find the right words. AI could write something more polished. It would mean less.

Compare that to writing a readme in an open source code repository. Here, the accuracy and clarity of the content is the entire point. Nobody reading the readme cares whether you personally typed every word. They care whether the installation instructions work. I’ve personally had AI write several of these recently. As long as I validate the content, nobody is harmed.

Most communication falls between those poles: a design doc for your team, Slack messages to a direct report, a proposal for a government contract. The right level of human involvement varies for each, and it’s not always obvious where the line falls.

Toward Responsible Disclosure

I propose a simple rule of thumb: never ask the consumer to invest more than you did as the producer.

If you spent five minutes generating a document, don’t send it with an ask that requires hours of review. If you haven’t read your own output, don’t ask someone else to read it for you. The effort you put into producing and refining what you send sets the ceiling for what you can reasonably ask of your audience. Violate this, and you’re not collaborating. You’re offloading.

How do you put this into practice? Disclosure.

Think of it as the communication equivalent of open source licensing. A license doesn’t tell you whether a product is good. It tells you what expectations and obligations attach to it. Disclosure does the same: it tells the recipient what kind of social contract they’re entering.

Three questions form the backbone:

What was your role?

In earlier work on AI-assisted creativity, I mapped human-AI relationships to distinct approaches: Author, Muse, Artisan, Debater, Creator, Curator. The same taxonomy applies to communication; when you send something to another person, it’s worth knowing (and disclosing) which mode you were operating in:

  • Author: entirely my words. No AI. Read this as direct human communication.
  • Creator: I developed this with AI assistance. The thinking is mine; AI helped me organize and express it. Read it as authored-with-assistance.
  • Artisan: AI generated a draft. I reshaped and validated it substantially. Read it as human-refined.
  • Curator: AI generated this. I selected and organized but didn’t deeply rework it. Read it as a starting point, not a finished product.

Each is legitimate in the right context. What’s not legitimate is sending Curator-level work with Author-level expectations attached.

What have you validated?

“AI generated, I haven’t read it” and “AI generated, I’ve verified the technical claims but the prose is rough” and “AI assisted, but the architecture and recommendations are mine”—these are three very different things. Your reader needs to know which one they’re holding. Say so.

What do you expect from the recipient?

Match your ask to your effort. If you haven’t put in hours, don’t ask for hours. If what you need is a five-minute directional check, say that explicitly; don’t give a vague “please review.”

The person who sent me that document could have written: “I used AI to generate a first-pass design doc based on the requirements we discussed. I haven’t reviewed it in depth yet. Could you skim it and tell me if the general approach is sound before I invest time refining it?” That’s honest and proportional.

Connection Boundary

There’s one area where disclosure isn’t enough, where AI involvement changes what the words communicate no matter how transparent you are about it: personal correspondence. Emails to friends, texts to family, messages of condolence or congratulations or love.

These are acts where the human effort of finding words is itself the gift. When you write to someone who’s grieving, the struggle to say the right thing, the imperfection of what you manage, the fact that you sat with the blinking cursor and tried, that’s what communicates care. A perfectly worded AI-generated sympathy message is, in every sense that matters, less than a clumsy human one. Not because the words are worse. Because the act of writing is absent. Using AI here isn’t labor-saving. It’s a category error, like sending a robot to your friend’s funeral because it would deliver a better eulogy.

Call this dividing line the connection boundary. Below it, on the information-transfer side, AI involvement is a question of degree and disclosure. Above it, on the human-connection side, AI involvement eliminates the thing that makes the communication matter.

This doesn’t mean AI can’t play any role in personal communication. Using it to think through what you want to say, to consider whether your message might be misread, that keeps you in the loop. The line is between using AI to prepare yourself to communicate and using AI to communicate for you. The first is rehearsal. The second is outsourcing connection.

Counterfeit Collaboration

A question worth asking before you send unreviewed AI output to anyone: do you really need the other person’s input before your own review and refinement? Or are you simply avoiding the work?

Sending raw output without oversight likely violates the investment principle on its face. You’re asking someone else to do the thinking you skipped. If you genuinely need a gut check on direction before doing the hard work of refinement, say so explicitly. But this should be a rare exception, not a default workflow. If it’s becoming habit, the tool isn’t saving you time, it’s helping you avoid learning how to evaluate your own work.

And there’s something worse about this pattern than mere laziness. When you send unreviewed AI output and ask someone to “have a look and we can discuss,” you’re wearing the costume of collaboration while gutting its substance. A real discussion about a design or a proposal requires both parties to have done enough thinking to bring something to the table.

The language says partnership. The reality says: I need you to do this for me.

The Risk of Atrophy

Writing is not just a way to record thoughts. It’s a way to have them. The process of drafting, that’s where clarity gets forged: trying to fit a complex idea into a sentence and failing, then trying again. If I’ve learned anything from writing this blog, it’s that the thinking happens in the writing, not before it.

The person who routinely sends unreviewed AI output isn’t just issuing unfunded mandates for other people’s attention. They’re outsourcing their own professional development: losing the capacity to think critically about their domain because they’ve stopped doing the work that critical thinking requires.

Contemplation Is Not Dialogue

Everything above addresses what you owe the person who receives your words. But there’s another problem, a quieter problem, and it’s about what you think you’ve accomplished by the time you hit send.

A philosophically-minded colleague noted that one’s assumptions can be “reflected back in the form of a dialogue when the AI response is actually closer to memory.” Real dialogue, the kind many conceptions of truth depend on, is how we test whether our views hold up against people who see the world differently. Knowing what’s true has a social and ethical component that AI can mimic and support but cannot be.

This matters practically. You have a long, productive-feeling conversation with a chatbot. It pushes back on your reasoning. Offers counterarguments. Helps you refine your thinking. By the end, you feel like your ideas have been stress-tested. But against what? The AI challenged your inferences, maybe capably. What it couldn’t do is challenge your priors with the weight of a different life behind the challenge. It has no competing commitments, no lived experience that diverges from yours, no stake in the outcome. The conversation felt like dialogue, but it was more like structured contemplation.

None of this makes AI conversation worthless. I obviously don’t think that, given that this essay grew out of one. Structured contemplation has real value, and it can help prepare for the real dialogue when it happens. The danger is mistaking the rehearsal for the performance.

Call this the internal version of the investment principle. The external version says: don’t ask your reader to put in more than you did. The internal version says: be honest about what kind of thinking you did. Working through ideas with AI is real work. Defending those ideas to a skeptical colleague who brings different assumptions to the table is different work. Both are useful, but the former is no substitute for the latter.

We’re early in figuring out what human-AI communication actually is. The analogies we reach for—tool, collaborator, ghostwriter, mirror—each grab a piece of it while dropping the rest. Better language will come. Until it does, the investment principle can serve us well: simple enough to apply right now, and honest enough to keep the responsibility where it belongs.

This essay is a companion to a series on AI and creativity: Light From Light, By Their Fruits, and Spellcraft. Those essays explored frameworks for human-AI creative collaboration. This one extends that thinking into personal and business communication.

Knee of the Curve

There’s so much ink being spilled about AI that it’s hard to keep track of it all. But I’m doing my best to stay connected to the important stuff, or at least things most relevant to my job.

Here’s a list of articles and essays that I’ve read recently that I found memorable for one reason or another, roughly in descending order of broad relevance:

The first three are especially powerful. If you’re reading this, read them instead.

Coming Up For Air

Believe it or not, I’m not going to say anything about Claude today.

I wrote a post a couple years ago about statistics I tracked while doing daily crossword puzzles. I took a couple years off, but last year I was back at it, this time using a calendar from the New York Times.

The NYT crosswords are supposed to get harder as the week goes on, with Monday being easiest and the weekend puzzles the most difficult. I wanted to prove that out, so I noted my solve time (capped at 30 minutes) for every puzzle, then computed an average solve time for each day of the week. The results are below:

Lo and behold, my experience aligns perfectly! I thought that was cool.
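The per-day averaging is simple enough to sketch. This is a minimal illustration of the method, with hypothetical times rather than my actual solve data:

```python
from collections import defaultdict

CAP_MINUTES = 30  # solves longer than this were recorded as 30

# Hypothetical (day-of-week, solve-minutes) records, not real data.
solves = [
    ("Mon", 8), ("Mon", 6), ("Tue", 11), ("Wed", 14),
    ("Thu", 19), ("Fri", 24), ("Sat", 35), ("Sun", 28),
]

# Group capped solve times by day of the week.
times_by_day = defaultdict(list)
for day, minutes in solves:
    times_by_day[day].append(min(minutes, CAP_MINUTES))

# Average per day.
averages = {day: sum(ts) / len(ts) for day, ts in times_by_day.items()}
print(averages["Mon"])  # 7.0
print(averages["Sat"])  # 30 (the 35-minute solve was capped)
```

The cap matters: without it, a single abandoned Saturday puzzle would dominate that day’s average.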

Over My Skis

A few minutes ago I published my first Go module. But here’s the thing: I don’t know Go. What madness is this?

Granted, I’ve been by myself most of this weekend, but in that time I’ve published 3 new public projects:

Plus, I have a fourth project in the works that’ll affect this blog materially. And I’ve built a sophisticated “AI Chief of Staff” for my own use (not published yet, but I will eventually in some form), and I’ve made a handful of smaller one-off utilities. And I’ve started spec-ing out a major project. And I’ve matured my local Claude Code configuration and spruced up my dotfiles. And, and, and.

It’s absolutely bonkers the throughput coding agents enable. Knee of the curve indeed.

No Seriously, Don’t Stop

I’m starting to feel a compulsion to keep as many Claude Code terminals running as I possibly can. Ready for lunch? Try to kick off a large implementation. Bathroom break needed? Run a research project in parallel. Bedtime? Don’t you dare until you have your swarm of agent teams configured with CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS and everything allowed thanks to --dangerously-skip-permissions.

Time to git add --all && git commit -m "yolo" && git push -f up in this business!

And is it time to graduate to Gas Town? I’m already using beads to good effect, and I’ve now reached Stage 6 on the Steve’s Evolution of the Developer chart.

Can’t Stop Won’t Stop

Yup, it’s another post about Claude (and I don’t think it’ll be my last).

Yesterday I had a need to share an extensive discussion I had in the Claude desktop app with a person who isn’t (yet) an AI user. I didn’t want to share the conversation with a public link (even if an obscure one). So I built a tool that takes a Claude export archive and turns it into a navigable website. From there I simply printed the relevant conversation to PDF and sent that via email. Easy peasy.

And by “I built” I obviously mean I asked Claude Code to do it. Took less than an hour of wall clock time, only about 10 minutes of it requiring my active attention. The result is now on GitHub for your enjoyment. Amount of code I hand-wrote: zero. That includes the documentation.

There were other solutions out there, but I figured it would take no more time to build a tool to my exact specifications. And I was right. That was a revelation. The times they are a-changin’, fellow software nerds. What I first mentioned as a far-off possibility back in 2017 now seems a reality.

Swords Are No More Use Here

I’ve been spending an awful lot of time with Claude Code as of late (including passing the Claude Code in Action course, because I do love badges).

The lingua franca of coding agents seems to be Markdown, which is totally cool, I’m a big fan. But in my experience to date (which involves a bunch of Spec Kit), Claude models don’t write syntactically correct Markdown every time. Admittedly I haven’t tried other models, and Opus 4.6 just came out yesterday so maybe things will improve, but for now there seem to be a couple consistent problems:

  • Emphasis is used in place of properly hierarchical headers (violates MD036)
  • Lists are not preceded by a blank line (which causes the list items to run together with the preceding text when rendered)
  • Items that should be on multiple lines do not have an extra line break between them (again, causing them to render as a single line)
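To make the first two concrete, here’s a hypothetical snippet (not output from any particular model) exhibiting both problems:

```markdown
**Configuration**
The available options are:
- alpha
- beta
```

The bolded line should be a real header such as `## Configuration` (that’s what MD036 flags), and a blank line belongs between “The available options are:” and the list; without it, some renderers fold the list items into the preceding paragraph.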

I got tired of these issues, and thus created a simple set of instructions to tell Claude not to do the above, and a hook to automatically lint all markdown files and tell the model to fix any issues that slip through. Dropped these into my global configuration, and so far, so good!

Here’s my CLAUDE.md:

````markdown
## Markdown Formatting

When generating or editing markdown files, always follow these rules for proper rendering:

- Use `-` for unordered lists, `1.` for ordered lists (not `*` or other markers)
- Include a blank line before bulleted (`-`) or numbered (`1.`) lists
- Include a blank line before fenced code blocks (```)
- Include a blank line before headers (`#`, `##`, etc.) except at file start
- Include a blank line between headers and content
- Do not use emphasis (`**`) as a header
- Always specify a language in a fenced code block
- Include a blank line between lines that should be rendered on separate lines
````

And my settings.json:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|MultiEdit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npx markdownlint-cli --disable MD013 -- \"**/*.md\" || $(exit 2)"
          }
        ]
      }
    ]
  }
}
```

That little bit at the end of the command is important: a PostToolUse hook must exit with code 2 to let Claude know that something needs tending, and markdownlint-cli doesn’t return that code by default when it finds problems.
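The trick is easy to sanity-check in a shell, independent of markdownlint. `$(exit 2)` expands to an empty string, and in POSIX shells a simple command that consists only of a command substitution takes the exit status of that substitution:

```shell
# A failing command ORed with $(exit 2): the right-hand side becomes an
# empty simple command whose status is that of the substitution, i.e. 2.
sh -c 'false || $(exit 2)'
echo "exit status: $?"   # prints "exit status: 2"

# When the left-hand command succeeds, the || branch never runs.
sh -c 'true || $(exit 2)'
echo "exit status: $?"   # prints "exit status: 0"
```

So any nonzero exit from markdownlint-cli gets remapped to exactly 2, which is the value the hook machinery listens for.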

Hope these are helpful!