Tag: Invent And Simplify

Can’t Stop Won’t Stop

Yup, it’s another post about Claude (and I don’t think it’ll be my last).

Yesterday I needed to share an extensive discussion from the Claude desktop app with someone who isn’t (yet) an AI user. I didn’t want to share the conversation with a public link (even an obscure one). So I built a tool that takes a Claude export archive and turns it into a navigable website. From there I simply printed the relevant conversation to PDF and sent it via email. Easy peasy.

And by “I built” I obviously mean I asked Claude Code to do it. Took less than an hour of wall clock time, only about 10 minutes of it requiring my active attention. The result is now on GitHub for your enjoyment. Amount of code I hand-wrote: zero. That includes the documentation.

There were other solutions out there, but I figured it’d take no more time to build a tool to my exact specifications. And I was right. That was a revelation. The times they are a-changin’, fellow software nerds. What I first mentioned as a far-off possibility back in 2017 now seems to be reality.

Swords Are No More Use Here

I’ve been spending an awful lot of time with Claude Code as of late (including passing the Claude Code in Action course, because I do love badges).

The lingua franca of coding agents seems to be Markdown, which is totally cool, I’m a big fan. But in my experience to date (which involves a bunch of Spec Kit), Claude models don’t write syntactically correct Markdown every time. Admittedly I haven’t tried other models, and Opus 4.6 just came out yesterday so maybe things will improve, but for now there seem to be a couple consistent problems:

  • Emphasis is used in place of properly hierarchical headers (violates MD036)
  • Lists are not preceded by a blank line (which causes the list items to run together when rendered)
  • Lines that should render separately lack a blank line between them (again causing them to render as one line)
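
To make the first two failure modes concrete, here’s a small before-and-after (exact rendering varies by Markdown processor, but markdownlint flags both patterns):

```markdown
<!-- Problematic: emphasis posing as a heading (MD036),
     and no blank line before the list -->
**Results**
Findings:
- finding one

<!-- Fixed: a real heading, with blank lines around the list -->
## Results

Findings:

- finding one
```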

I got tired of these issues, and thus created a simple set of instructions to tell Claude not to do the above, and a hook to automatically lint all markdown files and tell the model to fix any issues that slip through. Dropped these into my global configuration, and so far, so good!

Here’s my CLAUDE.md:

````markdown
## Markdown Formatting

When generating or editing markdown files, always follow these rules for proper rendering:

- Use `-` for unordered lists, `1.` for ordered lists (not `*` or other markers)
- Include a blank line before bulleted (`-`) or numbered (`1.`) lists
- Include a blank line before fenced code blocks (```)
- Include a blank line before headers (`#`, `##`, etc.) except at file start
- Include a blank line between headers and content
- Do not use emphasis (`**`) as a header
- Always specify a language in a fenced code block
- Include a blank line between lines that should be rendered on separate lines
````

And my settings.json:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|MultiEdit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npx markdownlint-cli --disable MD013 -- \"**/*.md\" || $(exit 2)"
          }
        ]
      }
    ]
  }
}
```

That little bit at the end of the command is important, because a PostToolUse hook must return an exit code of 2 to let Claude know that something needs tending (and markdownlint-cli doesn’t return that code when it finds problems).
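
If you want to convince yourself the trick works, here’s a minimal shell demonstration (the `false` stands in for a failing markdownlint run):

```shell
# "false" fails, so the right side of || runs. "$(exit 2)" expands to
# nothing, and a command with no command name takes the exit status of
# its last command substitution -- here, 2. That's the code the hook
# machinery looks for.
false || $(exit 2)
echo "exit status: $?"   # prints: exit status: 2
```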

Hope these are helpful!

Spellcraft: The Practice of AI Creativity

The first essay in this series, Light From Light, offered a theoretical framework: AI as sub-creator, bearing the image of humanity, generating in response to human vision. The second, By Their Fruits, mapped that framework onto practical approaches, each defined by a creative identity you might adopt: The Author, The Muse, The Artisan, The Debater, The Creator, The Curator.

But knowing which approach to choose isn’t the same as knowing how to execute it. This essay is about the craft of actually doing the work. Not theory, but practice. Not frameworks, but techniques.

Think of it as spellcraft: the particular incantations, gestures, and preparations that make the magic work.

Foundations Across All Approaches

Before diving into specific approaches, here are some principles that apply universally.

Start with clear intent. Before you open any AI interface, know what you’re trying to accomplish in this session. Not the whole project, just this sitting. “I want to draft the opening scene” is better than “I want to work on my novel.” Vague intent produces vague results.

Set the frame early. The first messages in any conversation shape everything that follows. If you want the AI to take a particular stance (critical, generative, adversarial), establish that at the outset. Changing modes mid-conversation is possible but harder.

Treat outputs as raw material. Even in approaches where AI generates extensively, never treat what emerges as finished. It’s ore, not refined metal. Your job is smelting, shaping, polishing.

Know when to start fresh. Long conversations accumulate context that can be helpful (the AI “remembers” your characters) but also constraining (the AI gets stuck in patterns). When things feel stale or repetitive, begin a new conversation and re-establish only what you need.

Match the model to the task. Simpler, faster models work well for brainstorming, quick feedback, and high-volume generation where you’re going to select ruthlessly anyway. More capable, slower models earn their cost for nuanced critique, complex narrative logic, and work requiring subtlety. Use the lighter tool when it suffices.

The Author

You are the sole generator. AI serves only as critic, never creating content that might end up in your work.

The core instruction. Your system prompt or opening message must be explicit and firm. Something like: “You are an editor and critic. You will never write content for me: no sample sentences, no suggested phrasings, no ‘here’s how you might put it.’ Your job is to identify problems and explain why they’re problems. I will do all the writing.”

Many AI systems are trained to be helpful through generation. You’re asking for the opposite, and you need to be insistent. If the AI slips and offers rewrites, redirect: “I asked you not to write for me. Just tell me what’s wrong and why.”

What to ask for. Request specific kinds of critique:

  • “Read this scene and identify where the pacing drags.”
  • “What are my three worst habits in this draft?”
  • “Where does the dialogue feel unnatural, and why?”
  • “What’s the weakest paragraph and what makes it weak?”

Avoid asking “Is this good?” or “What do you think?” These invite vague praise or unhelpfully broad criticism. Specific questions yield specific answers.

Working with feedback. When the AI identifies a problem, resist asking for solutions. Instead, ask clarifying questions: “Why does that section drag?” or “What would tightening look like in principle?” The goal is understanding the problem deeply enough to solve it yourself.

The temptation to resist. You will be tempted to ask for “just one example” of how to fix something. This is the crack through which pure authorship leaks away. If you’ve committed to this approach, hold the line. The struggle is the point.

The Muse

You are the sole source. AI is pure instrument, channeling your vision without contribution.

Maximum constraint. Your instructions should leave no room for AI interpretation: “Write exactly what I describe, in the style I specify, adding nothing.” This is the most constrained use of AI generation, not because you’re not generating, but because every element of what’s generated is dictated by you.

Dictation-level specificity. Your prompts must be detailed enough that a competent typist could produce roughly the same result: “Write a paragraph describing John entering the room. He moves slowly, tired from the journey. He notices the letter on the table but doesn’t pick it up yet. The tone is quiet dread. Use short sentences. No metaphors.”

This is demanding. You’re essentially pre-writing the content mentally and using AI to transcribe and polish.

Where this makes sense. The Muse approach works best when you have a clear vision and want execution at speed, producing content faster than you could type. It’s common in professional contexts where the creative decisions were made in planning and what’s needed is efficient production.

The slop risk. This approach, done lazily, produces generic content. If you don’t dictate with precision, the AI fills gaps with defaults, and defaults are what everyone else’s defaults are too. The Muse approach demands more from you, not less. Your vision must be detailed enough to fully specify the output.

The Artisan

AI provides structure. You craft the surface.

Getting useful scaffolds. Ask for architecture, not prose: “Outline a three-act structure for a story about [premise].” Or: “Break this chapter into scenes and describe the function of each.” Or: “What are the key beats a confrontation scene needs to hit?”

Keep the AI at the level of structure: scenes, beats, functions, sequences. When it starts offering prose, redirect: “Just the structure. I’ll handle the writing.”

Interrogating the scaffold. Don’t accept the first structure offered. Push: “Why does the confrontation need to come before the revelation? What if we reversed them?” Use the AI to explore structural options the way The Curator explores generative options.

Translating structure to prose. With your scaffold in hand, write. The AI has told you what needs to happen; your job is making it happen in language that’s yours. This is where your craft lives.

The structural debt. A risk of this approach: if the AI provides your structure, is the finished work really yours? For some writers this is fine; they consider prose the real creative work. For others it nags. Know your own conscience here.

The Debater

AI provides opposition. You sharpen your vision through friction.

Prompting for resistance. Explicitly request disagreement: “I’m planning to end this story with the protagonist forgiving her father. Argue against that choice. Make the strongest case you can for a different ending.” Or: “I think this scene works. Tell me everything that’s wrong with it. Be harsh.”

Most AI systems are trained toward agreement. You need to actively override this. Words like “argue against,” “challenge,” “push back,” “tell me why I’m wrong” help.

Steelmanning alternatives. Ask the AI to make the best case for options you’ve rejected: “I decided not to include a romantic subplot. Steelman the case for including one.” This isn’t about changing your mind (though you might). It’s about being confident you’ve considered the alternatives seriously.

The value of articulating defense. When the AI challenges you, don’t just dismiss—respond. Write out why you’re making the choice you’re making. The act of articulating your defense often clarifies your thinking, even if the AI’s objection was weak.

Knowing when to yield. Sometimes the adversary is right. Part of the discipline is recognizing when a challenge has landed, when your defense feels hollow, when you’re holding a position out of stubbornness rather than conviction. The Debater approach only works if you’re genuinely open to being persuaded.

The Creator

You provide vision and direction. AI sub-creates in response, generating content you then shape.

Establishing the relationship. Make your role as governing intelligence clear from the start: “We’re developing a story together. I’ll provide direction and make all final decisions. Your job is to generate options based on my vision, which I’ll then accept, reject, or redirect.”

Directing, not dictating. The art of this approach is in how you prompt. Too specific (“Write a scene where John enters the room, sees the letter on the table, picks it up with trembling hands…”) and you’re essentially dictating. You might as well write it yourself. Too vague (“Write the next scene”) and you lose creative control.

Find the middle register: “Write a scene where John discovers the letter. The emotional beat should be dread, not surprise, because he’s been expecting bad news. Keep it under 500 words.” This gives the AI room to generate while keeping your vision in control.

The shaping loop. Expect to work in cycles:

  1. You direct
  2. AI generates
  3. You evaluate: What works? What doesn’t?
  4. You redirect with specifics: “Keep the opening paragraph, but the dialogue feels too on-the-nose. Make it more oblique.”
  5. Repeat until satisfied

This is dialogue, not dictation. Each round should refine toward your vision.

Maintaining coherence. Longer projects risk the AI forgetting or contradicting earlier material. Periodically re-anchor: “Remember, Sarah’s defining trait is her reluctance to ask for help. Make sure that comes through in this scene.” For complex projects, consider maintaining a reference document you paste in at key moments.

Model considerations. More capable models handle this approach better because sub-creation requires understanding nuance, maintaining consistency, and generating text with genuine craft. Use faster models for initial brainstorming, slower ones when you’re working on material that matters.

The Curator

AI produces abundance. You select and arrange.

Prompting for volume. Your goal is generating many options quickly. Configure for higher randomness if possible. You want variety, not consistency. Prompt for explicit multiplicity: “Give me ten different opening lines for this chapter, ranging from quiet to dramatic.” Or: “Generate five different ways this confrontation could end, each with different emotional implications.”

Selection as craft. Your creative act is judgment. Develop criteria: What makes one option better than another for your purposes? Don’t just pick what sounds good. Articulate why it works. This clarity will improve your selections over time and teach you about your own taste.

Combining and recombining. Often the best result comes from synthesis: the opening of option three, the turn from option seven, a detail from option one. Curation isn’t just picking; it’s collage.

The danger of abundance. Endless options can become paralyzing. Set limits: “I’ll generate twenty options and pick from those.” Avoid the infinite scroll of “what if I generate just a few more.” At some point you have to choose.

When to curate and when to shape. Pure curation means taking what you pick and using it as-is. But most curators find themselves slipping into light shaping, adjusting a word here, smoothing a transition there. That’s fine. The approaches aren’t airtight. Know when you’ve shifted and whether that shift serves you.

Cross-Cutting Craft

Some considerations span all approaches.

Temperature and randomness. When you want variety and surprise, such as brainstorming, generating options, and early exploration, lean toward higher randomness. When you want consistency and precision, such as polishing, maintaining voice, and final passes, lean lower. Think of it as the difference between jazz improvisation and classical execution.
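
For the technically curious, here’s a toy sketch of what the temperature knob does mathematically. This is an illustration of the general idea only, not any particular vendor’s sampling implementation:

```python
def rescale(probs, temperature):
    """Rescale a probability distribution by temperature.

    Low temperature sharpens the distribution (the likeliest option
    dominates); high temperature flattens it (more variety).
    """
    scaled = [p ** (1.0 / temperature) for p in probs]
    total = sum(scaled)
    return [s / total for s in scaled]

base = [0.7, 0.2, 0.1]
print(rescale(base, 0.2))  # sharp: the first option takes nearly all the mass
print(rescale(base, 5.0))  # flat: the three options are nearly even
```

Jazz versus classical, in three lines of arithmetic.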

Context and memory. AI holds context within a conversation but not across conversations (unless using memory features). For ongoing projects, you’ll need to re-establish key information each session. Maintain a reference document with character details, plot points, and stylistic notes you can paste in when needed.

Revision passes. All approaches benefit from multiple passes with different frames. Write first, then switch to The Author mode for critique. Generate with The Creator, then curate the results. Layer the approaches as needed.

When to step away. AI is always available, but you aren’t always at your best. Fatigue leads to accepting weaker outputs, vague prompts, and abandoned discipline. Know when to close the laptop and return fresh.

The Spell’s Completion

These techniques are spellcraft: the practical knowledge that makes creative magic work. But spellcraft alone doesn’t make a wizard. The craft serves the vision, not the other way around. And the vision serves enchantment. The test Tolkien identified still holds: does the Secondary World produce belief? All this technique, in the end, is in service of that spell.

Know what you’re making and why. Know who you want to be as a maker. Then let the techniques serve those answers.

The theory of Light From Light explains the relationship. The approaches in By Their Fruits define your role within it. And the craft here, the particular prompts and practices, brings it into reality.

Light from light, choice from choice, word from word. Now go make something.


This is the third essay in a series on AI and creativity. The first, Light From Light, examined theoretical frameworks. The second, By Their Fruits, mapped approaches to creative identity. This essay explored the practical craft of execution.

When Everyone Is Super

By name, at least, I’ve now worked at six different vendors of government solutions. There’s a fundamental tension, one I’ve seen over and over again, that arises especially when building for state governments:

  • On one hand, vendors want to build products that can be deployed repeatedly across states for cost-effectiveness at scale and rapid per-project implementation
  • On the other hand, states have wildly divergent policy landscapes and political realities, even in seemingly similar domains, demanding highly customized solutions

This tension creates numerous challenges. First, how should the system be architected to support configurability in the first place? Doing so adds cost and risk. Then, how should vendors communicate configurable features to a paying customer who doesn’t need the options? If you’re collaborating closely during development (as you should be), they’re going to come up in planning and status meetings.

A case I’ve made that usually resonates is that configurable options let us as a vendor maintain a (mostly) common codebase across customers. That means when an improvement is made for any customer, everyone benefits. More succinctly: forks are bad. I can tell at least one tale of a high-profile private customer that initially insisted on its own radically customized copy of our company’s core product line, only to regret it a few years later when it took months to backport newer features.

Here are a few things for product and engineering folks to consider when developing a solution meant to scale through repeated implementations:

  • First Project: keep scalability in the back of your mind, but don’t overbuild (remember YAGNI) or you’ll price yourself out of your first customer; do basic foundational configurability and focus primarily on your immediate requirements
  • Second Project: don’t make the mistake of thinking you can discount your pricing; you’ve yet to hit economies of scale, and you’ll need any budget saved through reuse to expand your configurability and begin thinking about a long-term scaling strategy
  • Third Project: this all-important moment is where you can truly begin productizing, with full configurability (going beyond mere look and feel to business logic) and rapid, repeatable deployments
  • Fourth Project: by now you should be reaping the efficiency benefits of your configurability and repeatability; if you aren’t yet, act fast and invest at speed, or it’ll be too late

Finally, an anti-pattern:

```python
if customer == 'Customer 1':
    doAThing()
elif customer == 'Customer 2':
    doADifferentThing()
elif customer == 'Customer 3':
    doYetAnotherThing()
```

The above might be fine for your first couple projects, but if it’s still in your code by project 3 or 4, you’re doomed.
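
One way out is to push per-customer differences into configuration, so the code branches on declared capabilities rather than on who the customer is. A minimal sketch, with config keys and rule names invented purely for illustration:

```python
# Per-customer behavior lives in data, not in if/elif chains. Adding a
# customer means adding a config entry, not touching the code path.
CUSTOMER_CONFIG = {
    "customer_1": {"eligibility_rule": "strict"},
    "customer_2": {"eligibility_rule": "lenient"},
}

ELIGIBILITY_RULES = {
    "strict": lambda age: age >= 21,
    "lenient": lambda age: age >= 18,
}

def is_eligible(customer: str, age: int) -> bool:
    # Look up the customer's declared rule, then apply it.
    rule = CUSTOMER_CONFIG[customer]["eligibility_rule"]
    return ELIGIBILITY_RULES[rule](age)

print(is_eligible("customer_1", 20))  # False under the strict rule
print(is_eligible("customer_2", 20))  # True under the lenient rule
```

In a real product the config would come from a database or deployment settings rather than a dict, but the shape of the fix is the same.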

Covering the Bases

Dropdowns in web forms are generally good; they make it simpler for users to input options correctly and ensure back-end data integrity. They can be limiting at times, though, so I have to respect an event registration website I used yesterday that tried to be all-encompassing in the selections for “Title”:

Henceforth I expect to be addressed as “Lord Neer”

I have to imagine this list came from some out-of-the-box form generation tool. Or was it created by GenAI, perhaps? I’m curious where it came from, and whether it was even modifiable. Suffice it to say several of the choices are fairly pretentious given the event I was buying tickets for, so I feel like the developer should have looked into culling the list.

Headquarters (Part 3)

Just when you thought there couldn’t be more (oh trust me, there’s more), here comes another round of my series on computer setups (earlier posts are here and here).

Years: 2005-2010
Machine: The box in the closet from my last post but with a snazzy new LCD monitor (my first flat panel), wireless peripherals, and my wife’s great-grandmother’s writing desk.
What I was doing: writing daily on Xanga; applying to family camp; traveling to Tennessee to watch Revenge of the Sith with a high school friend; hosting LAN parties for Age of Empires III; traveling a lot for work.

Years: 2006-2007
Machines: A plethora of cast-off parts coalesced into a couple functional boxes in the garage.
What I was doing: running a NAS for storing all my media; trying to get Gentoo Linux to compile; realizing that running a bare web server on the Internet is asking for trouble.

Years: 2008-2010
Machines: A beefed up HTPC rig in a rackmount case with a whole bunch of amps to power my Magnepan speaker system, plus the plethora of beige boxes from before, but better arranged.
What I was doing: shivering in the garage when using this desk in the winter, listening to Comfortably Numb at peak volume, hosting movie-watching parties.

Year: 2011
Machines: Same rack mounted setup, but relocated from our old garage in Ohio to a closet in our new garage in San Diego, plus a random beige box (those just won’t go away).
What I was doing: Playing with some audio recording gear; streaming Game of Thrones; stocking up on printer paper, apparently.

Years: 2012-2014
Machines: Moved inside, and rebuilt the guts from the rackmount case into a traditional one (albeit black and silenced); also got my first Mac laptop, and I’ve never looked back.
What I was doing: writing election software; creating hand-made guitar effect pedals (that were terrible); developing back problems thanks to a crummy chair.

Figuratively Speaking

Speaking effectively to non-technical people can be a challenge for technical folks, but it’s an essential task for all but the most mundane (read: least-effective) of roles. One mechanism that I’ve found helpful is the use of metaphor. I’m a huge fan of trying to describe complex topics by mapping them to more broadly understood concepts. Being able to come up with such mappings fluently is a powerful skill. There may be many ways to develop it, but I suspect one is cultivating a wide set of interests.

While I was writing Tuesday’s post, it occurred to me that today’s Generative AI tools are to software what today’s 3D printers are to physical objects. On one hand, it’s incredible to be able to provide a specification and have it manifested in near real-time. Printers can make a variety of solids: toys, some kinds of replacement parts, that sort of thing. GenAI can create chunks of useful code, quick user interfaces, and basic apps, like my Pinochle scoresheet. But there are limits. Can either of these tools produce tight-tolerance precision parts / highly secure, performant code? Can they build complex solutions like electronics / web browsers?

A 3D printer creating a figurine

I could be wrong, but just like we’re a long ways from 3D printing an iPhone, we seem a ways away from vibe coding Microsoft Word or an entire government system of record.

Old Dog, New Tricks

Over two years ago I bought a few domains with the intent of building a tool for keeping track of card game scores. Like so many best-laid plans, it never happened. Until now.

With the advent of GenAI and “vibe coding” I figured there was no longer any excuse. I spun up Lovable and started prompting. The results? Not bad. Not bad at all. With maybe a dozen prompts and half an hour, you can see what came of it at onlinescoresheet.net. What impressed me most was that I could simply ask the model to do Pinochle scoring, and it understood what that meant and implemented it without my explaining the rules.

What’s up next? I’d like to generalize the scoring system to be configurable, or at the least add explicit support for a few more game types. I’d also like to dig into the source code to evaluate quality. Should be fun!