Month: May 2023

Not Forgotten

Generally speaking, people want to know they’ve made a difference in the world that will outlast them. Few occupations carry the intense immediacy of potentially giving one’s life for the future of human flourishing like that of the soldier. It’s an admirable profession worthy of respect, especially for those for whom this potential became reality. However, not many are suited for such work, and ideally the need for soldiers will shrink as societies mature.

Thankfully there are myriad other ways in which a person can approach a career with the future in mind. Last week I read through a number of articles from 80,000 Hours, and I have the book of the same name queued up as well. The premise is that our jobs take up a considerable fraction of our lives, likely more than any other activity, and thus it behooves us to think deeply about how to spend that time. That’s hardly controversial, but applying logical principles and data-backed guidance to maximize future impact can be (similar to discussions around Effective Altruism).

Personally, I find the arguments compelling. It’s why I’ll spend the rest of my time as a technologist dedicated to work in the public sector, and especially in the space of tech and politics. When it comes to maximizing human flourishing, good public policy is critical, and with a shrinking world, the risk of bad policy existential.

Many have died to give those who remain the chance to build this better world. Honored to be among the latter group; don’t intend to let it go to waste.

Wants What It Wants

(I seem to open a lot of blog posts with variations on “I’ve written before about X.” Here’s another one).

Back in 2017 I wrote three posts in a row about the tension between giving users what they want and guiding them to what is best. They came to mind this week when I ran up against one of Terraform’s more annoying (lack of) features: the inability to apply changes on a per-file basis. I absolutely understand why it’s not supported, but sometimes it’s a quick and dirty way to keep moving forward, and dang it, I needed it!

After a day of annoyingly overspecifying --target flags in my commands, I rolled up my sleeves and built a wrapping script that would do the work for me, essentially adding a --file flag to the CLI. I share it here as a service to the Infrastructure-as-Code community. Usage is easy:

terraform-files apply --file my-infra.tf

It’s hacky as heck, so use at your own risk, your mileage may vary, etc etc. And if you want to make it better, revisions welcomed!
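The actual script isn’t reproduced here, but the core idea can be sketched: pull the resource addresses out of a single .tf file and expand them into --target flags. This is a hypothetical reconstruction in Python (the real terraform-files may work differently), and it only handles plain resource blocks, not modules or data sources:

```python
# Hypothetical sketch of a terraform-files-style wrapper: extract resource
# addresses from one .tf file and convert them into --target flags.
import re
import subprocess

# Matches top-level `resource "TYPE" "NAME" {` declarations.
RESOURCE_RE = re.compile(r'^resource\s+"([^"]+)"\s+"([^"]+)"', re.MULTILINE)

def targets_from_file(tf_source: str) -> list[str]:
    """Return terraform resource addresses (type.name) found in the source."""
    return [f"{t}.{n}" for t, n in RESOURCE_RE.findall(tf_source)]

def run(action: str, path: str) -> None:
    """Run `terraform <action>` targeted at every resource in one file."""
    with open(path) as f:
        targets = targets_from_file(f.read())
    args = ["terraform", action]
    for target in targets:
        args += ["--target", target]
    subprocess.run(args, check=True)
```

A thin CLI layer around run() is all it takes to get the terraform-files apply --file my-infra.tf behavior shown above.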

A Matter of Perspective

In the past several months I’ve been making efforts to do more networking with technologists. One avenue to do that has been joining a couple of Slack workspaces (Rands Leadership and All Tech Is Human, specifically). A few days back a conversation topic was the difference between unit tests and integration tests, a topic on which I definitely have opinions.

As part of the discussion I came up with the following distinction, which I liked enough to codify here for posterity:

  • Unit Test: Tests one “thing” (function, module, service, system) in isolation
  • Integration Test: Tests multiple “things” (functions, modules, services, systems) in combination

Inherent to this definition is some ambiguity, because a single “thing” at one level is multiple “things” at another level. What matters definitionally is the spirit of a test: is it trying to test one thing or multiple things? If the former, it’s a unit test. Otherwise it’s an integration test.
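To make the distinction concrete, here’s a toy Python example (the functions are invented for illustration):

```python
# Two tiny "things" and the two flavors of test.
def parse_price(text: str) -> int:
    """Parse a price like "$12" into cents."""
    return int(text.lstrip("$")) * 100

def apply_discount(cents: int, percent: int) -> int:
    """Apply a whole-number percentage discount."""
    return cents - cents * percent // 100

# Unit test: exercises parse_price in isolation.
def test_parse_price_unit():
    assert parse_price("$12") == 1200

# Integration test: exercises both things in combination.
def test_discounted_price_integration():
    assert apply_discount(parse_price("$12"), 25) == 900
```

The same spirit applies at every scale: swap “function” for “service” and the unit test mocks its dependencies while the integration test talks to the real ones.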

For what it’s worth, I’m a much bigger fan of the test diamond than the test pyramid. The ratio of “amount of stuff tested” to “effort required to write tests” is so much higher when writing integration tests. And they (typically) test at the “actual business functionality” level, vs at the “does this code do the thing” level. And value is all that matters.

On a tangential note, I developed another type of test diamond a couple years back. It was initially designed when evaluating taco shops along Poway Road in San Diego, but it’s applicable to just about anything you want to rate. I leave the interpretation of the diagram as an exercise for the reader (the ambiguity is a feature, not a bug).

Slow And Steady

As of today I have a backlog of 49 draft posts going back to 2017. A good chunk of this backlog, especially recently, centers around themes of maintenance, longevity, sustainability, and generally making sure that work outlasts oneself in some form or another. It’s obviously something on my mind.

So when I sit down to write, why do I most often start with something new instead of picking up an old draft? I suspect it has to do with a fresh take being more motivating. For today, that motivation was my completion of the La Jolla Half Marathon. It’s a race I’ve run before, though it’s been six years.

What’s the relevance to the theme that I mentioned earlier? It’s not only that I am still able to run it despite being in my mid-40s, but that I beat my prior time by over 10 minutes (today’s result was 1:56:05). Which means my years of running have created the kind of habits that enable successful long runs with little to no additional preparation (I only signed up for this race three weeks ago, and thus didn’t have much margin to do a ramp-up beyond my normal weekly miles).

Similarly, while I don’t code every single day (nor should I, it’s not the focus of my current role), I aim to do enough programming regularly so that I’m ready to do more if a situation calls for it.

There’s probably also something to learn here about the long-term benefits of keeping systems running that are doing their jobs as designed and continue to provide value, versus continually looking to replatform and rebuild.

In short: maintenance matters.

It’s Never Five Minutes

There’s no magic method to software estimation that produces perfect results. Nor is it a skill that’s easily taught. For the most part, you just have to start doing it, make a lot of mistakes, and gradually you’ll get better over time.

That being said, here are a couple articles on the topic worth reading:

My own two (or more) cents is that individual task estimates will always have considerable uncertainty, but that can be mitigated by quantity. As long as there’s no systemic bias, the sum of estimates across a set of tasks will become more accurate as the set of tasks itself grows. And any bias that might exist (a tendency to underestimate is most common) can be compensated for by a final multiplicative factor (1.25 is a useful starting point, but can be refined over time).
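As a quick illustration with made-up numbers, the bias compensation is just a sum and a multiply:

```python
# Per-task estimates in hours (illustrative numbers); individual error is
# high, but errors tend to cancel across the sum -- as long as there's no
# systemic bias. Any remaining bias gets a multiplicative correction.
estimates = [3, 8, 2, 5, 13, 1, 6]

raw_total = sum(estimates)   # 38 hours
bias_factor = 1.25           # compensate for the typical underestimation
adjusted_total = raw_total * bias_factor

print(adjusted_total)        # 47.5
```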

Further, a diversity of estimates per task also increases accuracy, in direct proportion to the diversity of perspectives brought to it. Have front-end folks contribute to back-end estimates and vice versa; get estimates from both senior and junior engineers; include people who cut across multiple projects. If nothing else it will promote collaborative conversations.

Here’s a process I use to achieve that diversification. It’s most applicable when estimating a project, but could also be used for estimating large features, though it’d be overkill for a single task. It can be done either synchronously or (mostly) asynchronously, and doesn’t take more than a couple hours.

  1. Put together a team of three estimators, ideally with a variety of skills and experience relative to the project being scoped.
  2. Each person reads through whatever project materials have been assembled so far (descriptions of the work, details on the customer and industry, overall objectives, etc).
  3. On their own, each person independently writes down in a few sentences their understanding of the objective of the project. This description should be non-technical and describe business outcomes, not implementation details.
  4. The group comes together and synthesizes their separate descriptions into a unified paragraph they all agree accurately captures the project objectives.
  5. Each person independently puts together a list of tasks to be completed based on that description. These should include everything needed to get to done, including not only implementation but design time, testing, bug fixing, deployments, documentation, and so on.
  6. The group reassembles and synthesizes each list into a unified task list, removing duplicates as needed, and discussing any items that aren’t broadly understood, until everyone feels good that the list is as complete and detailed as possible.
  7. The scopers separate once again, and on their own each person comes up with an estimate for each task on the unified list. Scopers should not get hung up too long on each task; give it a few minutes thought, make a best effort (err on the high side), and capture any assumptions or unknowns.
  8. Once everyone has independently estimated, come together again, and compare estimates task by task (as well as any assumptions anyone captured for that task). If there’s little variance across the estimates on a task, use the average as a final value. If there is disagreement on the estimate, discuss until a common understanding is reached. Reword the task, or split it into multiple tasks if needed, and then estimate those subtasks. If a consensus final estimate cannot be reached after a reasonable amount of time, the largest original estimate for the task should be used.
  9. Sum the final estimates for each task to come up with a total estimate for the project. Each person gets a chance to share their feelings on this total: does it seem right in aggregate compared to the original description of work determined earlier? If so, excellent. If not, re-review the task list and iterate on estimates again, either as a group, or even going back separately.
  10. Collate the list of assumptions and unknowns and discuss. Based on what is identified, decide on a final uncertainty multiplier. As a general guidance, add 10% if the list is short and/or simple, 25% for “standard” assumptions, and up to 50% if there are large unknowns or significant anticipated complexities. There’s no hard rule here; apply professional judgment.
  11. Apply this multiplier to the task estimate sum to get a final estimate for the project. Share it with stakeholders along with the project description and list of assumptions. Be prepared to defend the value, but also be open to questions and challenges.
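The arithmetic in steps 8 through 11 can be sketched in Python. The tolerance threshold and sample numbers are my own illustrative choices, not part of the process as written, and in practice the disagreement branch is a conversation rather than a max() call; treat this as a caricature of the math only:

```python
# Sketch of steps 8-11: combine independent per-task estimates, sum them,
# and apply the final uncertainty multiplier.
def combine(task_estimates: list[float], tolerance: float = 0.25) -> float:
    """Average when estimators roughly agree; otherwise fall back to the
    largest estimate, standing in for the step 8 discussion."""
    low, high = min(task_estimates), max(task_estimates)
    mean = sum(task_estimates) / len(task_estimates)
    if (high - low) <= tolerance * mean:  # little variance: use the average
        return mean
    return high                           # disagreement: take the max

def project_estimate(tasks: dict[str, list[float]], multiplier: float) -> float:
    """Sum the per-task consensus values (step 9), then apply the
    uncertainty multiplier (steps 10-11)."""
    return sum(combine(e) for e in tasks.values()) * multiplier

tasks = {  # hours from three estimators each (made-up)
    "auth flow": [8, 9, 10],  # close agreement: averaged to 9
    "reporting": [4, 16, 6],  # wide spread: falls back to 16
    "deploy":    [2, 2, 2],   # unanimous: 2
}
print(project_estimate(tasks, 1.25))  # (9 + 16 + 2) * 1.25 = 33.75
```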

I haven’t met anyone who truly loves software estimation, but it’s critically important if you want to be successful beyond hobbyist level. Put in the effort as a team and reap the benefits.

Remix

I’m three weeks into the new job now, and while in many ways it’s exactly what I expected, there have been a few surprise challenges. However, what we’re facing isn’t new to me. Though the details are always different (history doesn’t repeat itself, despite the adage), and I’ll never feel totally up to the task, my nearly 25 years of professional technical work were excellent preparation.

It isn’t just the years of experience, though. I’ve intentionally pursued a variety of situations, and through a combination of hard work, the graciousness of colleagues and bosses, and some luck, I’ve been in the positions I’ve needed to develop professionally and grow my career. I’m thankful for that.

Walking the line between unchallenging safety and ineffective overreach is not easy, but I advise erring on the side of the latter. It’s true there’s no compression algorithm for experience, but one has some control over the speed and variety at which experiences are… experienced. And that’s an encouraging thought.

I have no doubt I’m right where I need to be. Looking forward to Monday and the chance to move the needle.

Just No

Can we all agree that “drinking from a fire hose” is a terrible metaphor for the feeling of starting a new job? It’s overused, cliched, and kinda gross.

What I find most funny is that it’s usually stated as a humblebrag about the amount of information you can ingest in short order, or to indicate that your new company is some kind of special unicorn doing work so incredibly complex that it overwhelms all who dare join it.

Reality is that the feeling of being overwhelmed in a new role is totally normal, even if the work is banal or the company is pedestrian. Sure, it takes time, but don’t make it sound harder than it is.

Personal Assistant

For some reason unbeknownst to me, health insurance in the United States is tied to employment. So one of the joys of starting a new job is being forced to reevaluate a bunch of plans, choose new options, and in some cases, find new doctors. As it turns out, my previous physician retired last year, so in any case I need to find a new one.

This afternoon I called my preferred medical network to ask about doctors accepting new patients, and the agent informed me that due to a physician shortage, there’s pretty much no availability for the next couple of months. Lovely. I asked about waiting lists or notifications I could sign up for; naturally those don’t exist either. The best he could suggest was to check the physician search page every couple of days.

Well, I’m lazy, so I reverse-engineered the search API behind the website, wrote a Lambda function to query it periodically, and set up an alarm to email me when the search results come back non-empty. Take that, manual process.
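For the curious, the shape of that Lambda might look something like this. The endpoint URL, query parameters, and response fields are all placeholders (the real ones came out of the reverse engineering), and the real function notified via email rather than just printing:

```python
# Sketch of the polling check. Endpoint and response shape are hypothetical.
import json
import urllib.parse
import urllib.request

SEARCH_URL = "https://example-medical-network.test/api/physician-search"

def accepting_new_patients(payload: dict) -> list[str]:
    """Return names of physicians marked as accepting new patients."""
    return [p["name"] for p in payload.get("results", [])
            if p.get("acceptingNewPatients")]

def handler(event, context):
    """Lambda entry point, run on a schedule (e.g. an EventBridge rule)."""
    query = urllib.parse.urlencode({"specialty": "primary-care"})
    with urllib.request.urlopen(f"{SEARCH_URL}?{query}") as resp:
        payload = json.load(resp)
    matches = accepting_new_patients(payload)
    if matches:
        # In the real setup, this is where the email notification fires
        # (e.g. publishing to an SNS topic with an email subscription).
        print(f"Doctors available: {', '.join(matches)}")
    return {"matches": matches}
```

Keeping the filtering logic in a pure function like accepting_new_patients makes the hack testable without hitting the live API.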