Author: Jud

Technologist interested in building both systems and organizations that are secure, scalable, cost-effective, and most of all, good for humanity.
It Figures

Since becoming a CTO, I’ve had a major uptick in spam, everything from thinly veiled LinkedIn connection requests to blatant direct emails. This one was particularly funny:

Oh Brian, you have a delightful sense of irony. Your products probably won’t solve my unwanted solicitation problem, but your marketing attempt did bring a smile to my face, and for that I thank you. Now go away.

For All The World To See

Want an amazing example of learning in public, far more comprehensive than anything I’ve described here? Look no further than the blog of Juraj Majerik, where he chronicles, in a 34-part series, building an Uber clone from the ground up. No detail is spared in his journey, including setting up automated deployments and detailed monitoring dashboards. I’ve never seen a learning project this thoroughly developed or documented. It puts a number of real products I’ve worked on to shame. And his claim that the whole thing took about 300 hours is impressive as well; I would have guessed higher.

I tip my hat to you, Juraj. May your example inspire many more to embark on such educational adventures.

Eye Of The Beholder

In My 20 Year Career Is Technical Debt, the author outlines how nearly everything he’s worked on has eventually been deprecated and replaced. It’s a hazard of technical work, where new approaches, frameworks, and solutions are being invented regularly. We’re not building cathedrals that will still be standing five hundred years from now. At least not in the sense that the stuff of our creation, the ones and zeros, will still be sitting on a storage medium and flowing through a CPU long into the future (with some rare exceptions).

But that’s not the point. What you build doesn’t matter; it’s all going to end up as technical debt (if you’re lucky) or deleted (if you’re not). What matters is the outcomes enabled by what you’ve created, and the relationships you’ve developed along the way. If your code plays a part in improving people’s lives for some period of time, then it’s done its job. And if those around you are better for having collaborated on it, you’ve done yours.

That’s all we can ask for. And it’s enough.

Not Forgotten

Generally speaking, people want to know they’ve made a difference in the world that will outlast themselves. Few occupations match the soldier’s for the intense immediacy of potentially giving one’s life for the future of human flourishing. It’s an admirable profession that is worthy of respect, especially for those for whom this potential became reality. However, not many are suited for such work, and ideally the need for soldiers will shrink as societies mature.

Thankfully there are myriad other ways in which a person can approach a career with the future in mind. Last week I read through a number of articles from 80,000 Hours, and I have the book of the same name queued up as well. The premise is that our jobs take up a considerable fraction of our lives, likely more than any other activity, and thus it behooves us to think deeply about how to spend that time. That’s hardly controversial, but applying logical principles and data-backed guidance to maximize future impact can be (similar to discussions around Effective Altruism).

Personally, I find the arguments compelling. It’s why I’ll spend the rest of my time as a technologist dedicated to work in the public sector, and especially in the space of tech and politics. When it comes to maximizing human flourishing, good public policy is critical, and with a shrinking world, the risk of bad policy existential.

Many have died to give those who remain the chance to build this better world. Honored to be among the latter group; don’t intend to let it go to waste.

Wants What It Wants

(I seem to open a lot of blog posts with variations on “I’ve written before about X.” Here’s another one.)

Back in 2017 I wrote three posts in a row about the tension between giving users what they want and guiding them to what is best. They came to mind this week when I ran up against one of terraform’s more annoying (lack of) features: the inability to apply changes on a per-file basis. I absolutely understand why it’s not supported, but sometimes it’s a quick and dirty way to keep moving forward, and dang it, I needed it!

After a day of annoyingly overspecifying --target flags in my commands, I rolled up my sleeves and built a wrapping script that would do the work for me, essentially adding a --file flag to the CLI. I share it here as a service to the Infrastructure-as-Code community. Usage is easy:

terraform-files apply --file my-infra.tf
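
The core idea is simple enough to sketch in Python. To be clear, this is a simplified illustration rather than the real script, and the regex handling of resource and module blocks is a best-effort assumption:

#!/usr/bin/env python3
# terraform-files: illustrative sketch of a --file wrapper around terraform.
# It extracts resource/module addresses from a single .tf file and turns
# each one into a --target flag before delegating to the real binary.
import re
import subprocess
import sys

BLOCK = re.compile(r'^\s*(resource|module)\s+"([^"]+)"(?:\s+"([^"]+)")?', re.MULTILINE)

def targets_from_file(path):
    with open(path) as f:
        text = f.read()
    targets = []
    for kind, first, second in BLOCK.findall(text):
        if kind == "resource":
            targets.append(f"{first}.{second}")  # resource "aws_s3_bucket" "logs" -> aws_s3_bucket.logs
        else:
            targets.append(f"module.{first}")    # module "vpc" -> module.vpc
    return targets

def main():
    args = sys.argv[1:]
    if "--file" in args:
        i = args.index("--file")
        tf_file = args[i + 1]
        del args[i:i + 2]
        args += [f"--target={t}" for t in targets_from_file(tf_file)]
    sys.exit(subprocess.run(["terraform"] + args).returncode)

if __name__ == "__main__":
    main()

All it really does is rewrite --file into the pile of --target flags I was typing by hand, which is exactly the tedium it exists to automate.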

It’s hacky as heck, so use at your own risk, your mileage may vary, etc etc. And if you want to make it better, revisions welcomed!

A Matter of Perspective

In the past several months I’ve been making efforts to do more networking with technologists. One avenue for that has been joining a couple of Slack workspaces (Rands Leadership and All Tech Is Human, specifically). A few days back a conversation topic was the difference between unit tests and integration tests, a topic on which I definitely have opinions.

As part of the discussion I came up with the following distinction, which I liked enough to codify here for posterity:

  • Unit Test: Tests one “thing” (function, module, service, system) in isolation
  • Integration Test: Tests multiple “things” (functions, modules, services, systems) in combination

Inherent to this definition is some ambiguity, because a single “thing” at one level is multiple “things” at another level. What matters definitionally is the spirit of a test: is it trying to test one thing or multiple things? If the former, it’s a unit test. Otherwise it’s an integration test.
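
A hypothetical example makes the distinction concrete (the functions here are invented purely for illustration):

# One "thing": a pure function.
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

# Multiple "things": summing and discounting combined.
def checkout_total(prices: list[float], discount_percent: float) -> float:
    return apply_discount(sum(prices), discount_percent)

# Unit test: one thing, in isolation.
def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0

# Integration test: multiple things, in combination.
def test_checkout_total():
    assert checkout_total([40.0, 60.0], 10) == 90.0

And note that at a different zoom level, checkout_total is itself a single “thing”, which is exactly the ambiguity described above.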

For what it’s worth, I’m a much bigger fan of the test diamond than the test pyramid. The ratio of “amount of stuff tested” to “effort required to write tests” is so much higher when writing integration tests. And they (typically) test at the “actual business functionality” level, vs at the “does this code do the thing” level. And value is all that matters.

On a tangential note, I developed another type of test diamond a couple years back. It was initially designed when evaluating taco shops along Poway Road in San Diego, but it’s applicable to just about anything you want to rate. I leave the interpretation of the diagram as an exercise for the reader (the ambiguity is a feature, not a bug).

Slow And Steady

As of today I have a backlog of 49 draft posts going back to 2017. A good chunk of this backlog, especially recently, centers around themes of maintenance, longevity, sustainability, and generally making sure that work outlasts oneself in some form or another. It’s obviously something on my mind.

So when I sit down to write, why do I most often start with something new instead of picking up an old draft? I suspect it has to do with a fresh take being more motivating. For today, that motivation was my completion of the La Jolla Half Marathon. It’s a race I’ve run before, though it’s been six years.

What’s the relevance to the theme I mentioned earlier? It’s not only that I can still run it despite being in my mid-40s, but that I beat my prior time by over 10 minutes (today’s result was 1:56:05). That means my years of running have created the kind of habits that enable successful long runs with little to no additional preparation (I only signed up for this race three weeks ago, and thus didn’t have much margin to ramp up beyond my normal weekly miles).

Similarly, while I don’t code every single day (nor should I, it’s not the focus of my current role), I aim to do enough programming regularly so that I’m ready to do more if a situation calls for it.

There’s probably also something to learn here about the long-term benefits of keeping systems running that are doing their jobs as designed and continue to provide value, versus continually looking to replatform and rebuild.

In short: maintenance matters.

It’s Never Five Minutes

There’s no magic method to software estimation that produces perfect results. Nor is it a skill that’s easily taught. For the most part, you just have to start doing it, make a lot of mistakes, and gradually you’ll get better over time.

That being said, here are a couple articles on the topic worth reading:

My own two (or more) cents is that individual task estimates will always have considerable uncertainty, but that can be mitigated by quantity. As long as there’s no systemic bias, the sum of estimates across a set of tasks will become more accurate as the set of tasks itself grows. And any bias that might exist (a tendency to underestimate is most common) can be compensated for by a final multiplicative factor (1.25 is a useful starting point, but can be refined over time).
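
A toy simulation illustrates the effect (the ±50% error band and task sizes below are arbitrary assumptions, chosen only to make the point):

import random

random.seed(0)

def average_relative_error(n_tasks, trials=10_000):
    # Each task estimate is unbiased but off by up to +/-50%; measure
    # how far the summed estimate lands from the true project total.
    errors = []
    for _ in range(trials):
        actuals = [random.uniform(1, 10) for _ in range(n_tasks)]
        estimates = [a * random.uniform(0.5, 1.5) for a in actuals]
        errors.append(abs(sum(estimates) - sum(actuals)) / sum(actuals))
    return sum(errors) / trials

for n in (1, 10, 100):
    print(f"{n:>3} tasks: {average_relative_error(n):.1%} average error")

The error on any individual task never improves, but the relative error on the total shrinks steadily as the task count grows.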

Further, a diversity of estimates per task also increases accuracy, in direct proportion to the diversity of perspectives brought to bear. Have front-end folks contribute to back-end estimates and vice versa; get estimates from both senior and junior engineers; include people who cut across multiple projects. If nothing else it will promote collaborative conversations.

Here’s a process I use to achieve this kind of diversification. It’s most applicable when estimating a project, but it could also be used for estimating large features, though it’d be overkill for a single task. It can be done either synchronously or (mostly) asynchronously, and doesn’t take more than a couple hours.

  1. Put together a team of three estimators, ideally with a variety of skills and experience relative to the project being scoped.
  2. Each person reads through whatever project materials have been assembled so far (descriptions of the work, details on the customer and industry, overall objectives, etc.).
  3. On their own, each person independently writes down in a few sentences their understanding of the objective of the project. This description should be non-technical and describe business outcomes, not implementation details.
  4. The group comes together and synthesizes their separate descriptions into a unified paragraph they all agree accurately captures the project objectives.
  5. Each person independently puts together a list of tasks to be completed based on that description. These should include everything needed to get to done, including not only implementation but design time, testing, bug fixing, deployments, documentation, and so on.
  6. The group reassembles and synthesizes each list into a unified task list, removing duplicates as needed, and discussing any items that aren’t broadly understood, until everyone feels good that the list is as complete and detailed as possible.
  7. The scopers separate once again, and on their own each person comes up with an estimate for each task on the unified list. Scopers should not get hung up too long on any one task: give it a few minutes’ thought, make a best effort (erring on the high side), and capture any assumptions or unknowns.
  8. Once everyone has independently estimated, come together again and compare estimates task by task (along with any assumptions captured for each). If there’s little variance across the estimates on a task, use the average as the final value. If there is disagreement, discuss until a common understanding is reached, rewording the task or splitting it into multiple tasks (and estimating those subtasks) if needed. If a consensus estimate cannot be reached after a reasonable amount of time, use the largest original estimate for the task (the arithmetic for this and the following steps is sketched in code after this list).
  9. Sum the final estimates for each task to come up with a total estimate for the project. Each person gets a chance to share their feelings on this total: does it seem right in aggregate compared to the original description of work determined earlier? If so, excellent. If not, re-review the task list and iterate on estimates again, either as a group, or even going back separately.
  10. Collate the list of assumptions and unknowns and discuss. Based on what is identified, decide on a final uncertainty multiplier. As a general guidance, add 10% if the list is short and/or simple, 25% for “standard” assumptions, and up to 50% if there are large unknowns or significant anticipated complexities. There’s no hard rule here; apply professional judgment.
  11. Apply this multiplier to the task estimate sum to get a final estimate for the project. Share it with stakeholders along with the project description and list of assumptions. Be prepared to defend the value, but also be open to questions and challenges.
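
Mechanically, the arithmetic in steps 8 through 11 fits in a few lines of Python (a toy sketch; the 25% agreement threshold is an arbitrary assumption, and real disagreements should be settled by discussion, not code):

def task_estimate(estimates, agreement_threshold=0.25):
    # Step 8: if the spread across estimators is small, average them;
    # otherwise (absent a discussed consensus) fall back to the max.
    avg = sum(estimates) / len(estimates)
    spread = (max(estimates) - min(estimates)) / avg
    return avg if spread <= agreement_threshold else max(estimates)

def project_estimate(per_task_estimates, uncertainty_multiplier):
    # Steps 9 and 11: sum the per-task values, then apply the
    # uncertainty multiplier from step 10 (1.10, 1.25, up to 1.50).
    return sum(task_estimate(e) for e in per_task_estimates) * uncertainty_multiplier

# Three estimators, four tasks, "standard" 25% uncertainty:
print(project_estimate([[4, 5, 4], [8, 8, 10], [2, 2, 3], [16, 12, 20]], 1.25))  # -> 45.0

The discussion steps are where the real value lives; the code is just bookkeeping.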

I haven’t met anyone who truly loves software estimation, but it’s critically important if you want to be successful beyond hobbyist level. Put in the effort as a team and reap the benefits.

Remix

I’m three weeks into the new job now, and while in many ways it’s exactly what I expected, there have been a few surprise challenges. However, what we’re facing isn’t new to me. Though the details are always different (history doesn’t repeat itself, despite the adage), and I’ll never feel totally up to the task, my nearly 25 years of professional technical work were excellent preparation.

It isn’t just the years of experience, though. I’ve intentionally pursued a variety of situations, and through a combination of hard work, the graciousness of colleagues and bosses, and some luck, I’ve been in the positions I needed to develop professionally and grow my career. I’m thankful for that.

Walking the line between unchallenging safety and ineffective overreach is not easy, but I advise erring on the side of the latter. While it’s true there’s no compression algorithm for experience, one has some control over the speed and variety at which experiences are… experienced. And that’s an encouraging thought.

I have no doubt I’m right where I need to be. Looking forward to Monday and the chance to move the needle.

Just No

Can we all agree that “drinking from a fire hose” is a terrible metaphor for the feeling of starting a new job? It’s overused, clichéd, and kinda gross.

What I find funniest is that it’s usually deployed as a humblebrag about the amount of information you can ingest in short order, or to indicate that your new company is some kind of special unicorn doing work so incredibly complex that it overwhelms all who dare join it.

The reality is that feeling overwhelmed in a new role is totally normal, even if the work is banal or the company is pedestrian. Sure, getting up to speed takes time, but don’t make it sound harder than it is.