Tag: Are Right A Lot

Not Forgotten

Generally speaking, people want to know they’ve made a difference in the world that will outlast them. Few occupations carry the intense immediacy of potentially giving one’s life for the future of human flourishing like that of the soldier. It’s an admirable profession worthy of respect, especially for those for whom this potential became reality. But not many are suited for such work, and ideally the need for soldiers will shrink as societies mature.

Thankfully there are myriad other ways in which a person can approach a career with the future in mind. Last week I read through a number of articles from 80,000 Hours, and I have the book of the same name queued up as well. The premise is that our jobs take up a considerable fraction of our lives, likely more than any other activity, and thus it behooves us to think deeply about how to spend that time. That’s hardly controversial, but applying logical principles and data-backed guidance to maximize future impact can be (similar to discussions around Effective Altruism).

Personally, I find the arguments compelling. It’s why I’ll spend the rest of my time as a technologist dedicated to work in the public sector, and especially in the space of tech and politics. When it comes to maximizing human flourishing, good public policy is critical, and in a shrinking world, the risk of bad policy is existential.

Many have died to give those who remain the chance to build this better world. Honored to be among the latter group; don’t intend to let it go to waste.

Wants What It Wants

(I seem to open a lot of blog posts with variations on “I’ve written before about X.” Here’s another one.)

Back in 2017 I wrote three posts in a row about the tension between giving users what they want and guiding them to what is best. They came to mind this week when I ran up against one of terraform’s more annoying (lack of) features: the inability to apply changes on a per-file basis. I absolutely understand why it’s not supported, but sometimes it’s a quick and dirty way to keep moving forward, and dang it, I needed it!

After a day of annoyingly overspecifying --target flags in my commands, I rolled up my sleeves and built a wrapping script that would do the work for me, essentially adding a --file flag to the CLI. I share it here as a service to the Infrastructure-as-Code community. Usage is easy:

terraform-files apply --file my-infra.tf
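
Under the hood, the idea is simply to translate every resource and module block in the given file into a -target flag and hand everything else to terraform unchanged. Here’s a minimal sketch of that approach in Python; the regex and flag handling are illustrative, not the exact script:

    #!/usr/bin/env python3
    # Sketch only: scan a .tf file for resource/module blocks and expand
    # them into -target flags. Regex and flag parsing are illustrative.
    import re
    import subprocess
    import sys

    BLOCK = re.compile(
        r'^\s*(?:resource\s+"([\w-]+)"\s+"([\w-]+)"|module\s+"([\w-]+)")',
        re.MULTILINE,
    )

    def targets(path):
        # Yield a terraform address for each block defined in the file.
        for rtype, rname, mname in BLOCK.findall(open(path).read()):
            yield f"module.{mname}" if mname else f"{rtype}.{rname}"

    passthrough, files = [], []
    argv = iter(sys.argv[1:])
    for arg in argv:
        if arg == "--file":
            files.append(next(argv))  # our invented flag; consume its value
        else:
            passthrough.append(arg)   # everything else goes to terraform

    flags = [f"-target={t}" for f in files for t in targets(f)]
    sys.exit(subprocess.call(["terraform"] + passthrough + flags))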

It’s hacky as heck, so use at your own risk, your mileage may vary, etc etc. And if you want to make it better, revisions welcomed!

It’s Never Five Minutes

There’s no magic method to software estimation that produces perfect results. Nor is it a skill that’s easily taught. For the most part, you just have to start doing it, make a lot of mistakes, and gradually you’ll get better over time.

That being said, a couple of articles on the topic are worth reading.

My own two (or more) cents: individual task estimates will always carry considerable uncertainty, but that can be mitigated by quantity. As long as there’s no systematic bias, independent errors tend to cancel, so the relative error of the summed estimate shrinks as the set of tasks grows. And any bias that does exist (a tendency to underestimate is most common) can be compensated for by a final multiplicative factor (1.25 is a useful starting point, and can be refined over time).
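
If the statistics feel hand-wavy, a toy simulation makes the point concrete. In the hypothetical below, every per-task estimate is biased 20% low with ±50% noise, yet the relative error of the multiplier-corrected total shrinks steadily as the task count grows (all numbers invented for illustration):

    # Toy model: each estimate is biased 20% low with +/-50% independent
    # noise; a 1.25 multiplier corrects the bias, and the noise on the
    # total averages out as the task list grows.
    import random

    random.seed(42)

    def mean_relative_error(n_tasks, trials=5000):
        errors = []
        for _ in range(trials):
            actuals = [random.uniform(1, 10) for _ in range(n_tasks)]
            estimates = [a * 0.8 * random.uniform(0.5, 1.5) for a in actuals]
            corrected = sum(estimates) * 1.25  # the compensating multiplier
            errors.append(abs(corrected - sum(actuals)) / sum(actuals))
        return sum(errors) / trials

    for n in (1, 5, 25, 100):
        print(f"{n:3d} tasks: {mean_relative_error(n):.1%} mean relative error")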

Further, a diversity of estimates per task also increases accuracy, in direct proportion to the diversity of perspectives brought to bear. Have front-end folks contribute to back-end estimates and vice versa; get estimates from both senior and junior engineers; include people who cut across multiple projects. If nothing else, it will promote collaborative conversations.

Here’s a process I use to achieve the above diversifications. It’s most applicable when estimating a project, but could also be used for estimating large features, though it’d be overkill for a single task. It can be done either synchronously or (mostly) asynchronously, and doesn’t take more than a couple hours.

  1. Put together a team of three estimators, ideally with a variety of skills and experience relative to the project being scoped.
  2. Each person reads through whatever project materials have been assembled so far (descriptions of the work, details on the customer and industry, overall objectives, etc).
  3. On their own, each person independently writes down in a few sentences their understanding of the objective of the project. This description should be non-technical and describe business outcomes, not implementation details.
  4. The group comes together and synthesizes their separate descriptions into a unified paragraph they all agree accurately captures the project objectives.
  5. Each person independently puts together a list of tasks to be completed based on that description. These should include everything needed to get to done, including not only implementation but design time, testing, bug fixing, deployments, documentation, and so on.
  6. The group reassembles and synthesizes each list into a unified task list, removing duplicates as needed, and discussing any items that aren’t broadly understood, until everyone feels good that the list is as complete and detailed as possible.
  7. The scopers separate once again, and on their own each person comes up with an estimate for each task on the unified list. Scopers should not get hung up too long on any one task; give it a few minutes’ thought, make a best effort (err on the high side), and capture any assumptions or unknowns.
  8. Once everyone has independently estimated, come together again, and compare estimates task by task (as well as any assumptions anyone captured for that task). If there’s little variance across the estimates on a task, use the average as a final value. If there is disagreement on the estimate, discuss until a common understanding is reached. Reword the task, or split it into multiple tasks if needed, and then estimate those subtasks. If a consensus final estimate cannot be reached after a reasonable amount of time, the largest original estimate for the task should be used.
  9. Sum the final estimates for each task to come up with a total estimate for the project. Each person gets a chance to share their feelings on this total: does it seem right in aggregate compared to the original description of work determined earlier? If so, excellent. If not, re-review the task list and iterate on estimates again, either as a group, or even going back separately.
  10. Collate the list of assumptions and unknowns and discuss. Based on what is identified, decide on a final uncertainty multiplier. As general guidance, add 10% if the list is short and/or simple, 25% for “standard” assumptions, and up to 50% if there are large unknowns or significant anticipated complexities. There’s no hard rule here; apply professional judgment. (See the sketch after this list for how steps 8 through 11 combine numerically.)
  11. Apply this multiplier to the task estimate sum to get a final estimate for the project. Share it with stakeholders along with the project description and list of assumptions. Be prepared to defend the value, but also be open to questions and challenges.
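
To make the tail end of the process concrete, here’s what steps 8 through 11 look like in code. The task names, numbers, and disagreement threshold are all hypothetical:

    # Steps 8-11 in miniature: three scopers per task, average when they
    # agree, take the largest when they don't, then apply the multiplier.
    from statistics import mean, stdev

    estimates = {                              # task -> days, one per scoper
        "API endpoints":    [3.0, 4.0, 3.5],
        "Auth integration": [5.0, 9.0, 4.0],   # high variance: discuss first
        "Deploy pipeline":  [2.0, 2.5, 2.0],
    }

    finals = {}
    for task, ests in estimates.items():
        if stdev(ests) / mean(ests) > 0.4:     # no consensus after discussion
            finals[task] = max(ests)           # fall back to largest estimate
        else:
            finals[task] = mean(ests)

    subtotal = sum(finals.values())            # step 9
    multiplier = 1.25                          # step 10: "standard" assumptions
    print(f"{subtotal:.1f} days raw, {subtotal * multiplier:.1f} days final")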

I haven’t met anyone who truly loves software estimation, but it’s critically important if you want to be successful beyond hobbyist level. Put in the effort as a team and reap the benefits.

Remix

I’m three weeks into the new job now, and while in many ways it’s exactly what I expected, there have been a few surprise challenges. However, what we’re facing isn’t new to me. Though the details are always different (history doesn’t repeat itself, despite the adage), and I’ll never feel totally up to the task, my nearly 25 years of professional technical work were excellent preparation.

It isn’t just the years of experience, though. I’ve intentionally pursued a variety of situations, and through a combination of hard work, the graciousness of colleagues and bosses, and some luck, I’ve been in the positions I’ve needed to develop professionally and grow my career. I’m thankful for that.

Walking the line between unchallenging safety and ineffective overreach is not easy, but I advise erring on the side of the latter. It’s true there’s no compression algorithm for experience, but one has some control over the speed and variety at which experiences are… experienced. And that’s an encouraging thought.

I have no doubt I’m right where I need to be. Looking forward to Monday and the chance to move the needle.

Don’t Repeat Yourself

Technologists are generally pretty bad at understanding their own history. Understandable, given how quickly the industry moves, but still regrettable.

If you’ve ever written a line of JavaScript or called an API that returned JSON (and who hasn’t, they’re about as ubiquitous as tech can get), you should get to know Douglas Crockford. This interview with him covers a wide range of topics, including how JSON came to be (spoiler: as the antidote to XML), his perspective on JavaScript as a whole (and how it changed from his first impression), what it was like to work through the dot com bubble, and much more.

I also suppose I should study my own history better, because I’ve written on the topic of studying history several times before. Though not all forms of repetition are bad, right? Right?

Security Sunday

I’ve been a daily user of YubiKeys since 2018. These little devices pack a hefty security punch with a number of useful features, including universal second factor (U2F), time-based one-time passwords (TOTP), static passwords, and personal identity verification (PIV).

This article contains an excellent overview of all the functions and how to use them. If you’re at all interested in beefing up your security posture, I can’t recommend it highly enough.

Sean Maguire Was Right

Don Norman came to mind today, a not uncommon occurrence. I’ve mentioned him before, but not loudly enough. If you care at all about building great solutions, technical or otherwise, reading The Design of Everyday Things is a must. Here’s my favorite passage:

The idea that a person is at fault when something goes wrong is deeply entrenched in society. That’s why we blame others and even ourselves. Unfortunately, the idea that a person is at fault is imbedded in the legal system. When major accidents occur, official courts of inquiry are set up to assess the blame. More and more often the blame is attributed to “human error.” The person involved can be fined, punished, or fired. Maybe training procedures are revised. The law rests comfortably. But in my experience, human error usually is a result of poor design: it should be called system error. Humans err continually; it is an intrinsic part of our nature. System design should take this into account. Pinning the blame on the person may be a comfortable way to proceed, but why was the system ever designed so that a single act by a single person could cause calamity? Worse, blaming the person without fixing the root, underlying cause does not fix the problem: the same error is likely to be repeated by someone else.

The notion of system error is quite profound, applicable to technology, organizations, governments, even entire civilizations. Leaders of all stripes would do well to consider its explanatory power.

Just discovered that Don has a new book coming next month: Design for a Better World. Pre-ordered!

No Easy Answers

I’ve been managing technical people for a while now, but when it comes to asking good questions and listening well, I’m always learning. One thing I’ve discovered is that questions needn’t be complex to be effective. Here are three I use regularly:

How do you feel about that?

Giving someone space to express their emotions is usually a good place to start when beginning a conversation. This is doubly true in the workplace, where there’s a misperception that feelings have no place. But we’re all human, and our effectiveness is predicated on aligning our emotions to the task at hand.

What could you do about that?

Once a person feels safe describing how they feel about a situation, it’s time to explore options for how to move forward. The word could here is critical: it’s a word about possibilities. Usually with just a little nudge, people will be able to come up with a variety of potential solutions on their own.

What do you want to do about that?

Too often people are asked to consider all sorts of factors when weighing options, but never their own desires. And not just surface desires, but what they truly want based on their own complex (and often competing and contradictory) web of values. It’s a powerful question; simple to ask, but hard to answer truthfully. Once it is answered, though, I’ve found one often has all the data at hand to make a high-quality decision.

Irreducible Complexity

“Make everything as simple as possible, but not simpler.” – Albert Einstein

In any technical discussion, beware when someone says something is simple. Things are rarely as simple as their marketing materials claim, and there is a vast gulf between a quickly-constructed proof of concept and a production-ready solution.

Even concepts that might seem straightforward at first glance, like names, dates, and addresses, have considerable potential for edge cases and other gotchas; the various “falsehoods programmers believe” lists are the canonical examples.

A corollary to the above: there is no such thing as a “5 minute task” in technology. When you hear such a claim, mentally multiply by 10.

Jud Flow

Keeping a nice and tidy code repository makes me happy. Here’s the typical process I use to avoid messes:

  1. Create my-sweet-new-feature branch from main
  2. Make some awesome code edits, then commit them
  3. Make some slightly less awesome edits, commit them also
  4. Run some tests, nothing works; debug and commit the fix
  5. Decide my originally awesome code isn’t so awesome; rewrite the whole feature and commit
  6. Tests pass locally, yay! Push my-sweet-new-feature to Github and create a pull request
  7. Hrm, tests fail in pipeline; whoops, forgot a file; commit and repush
  8. Okay, tests pass now, so message team for review
  9. That’s a reasonable request, change made and committed
  10. Fine, we’ll use your naming convention; change made and committed
  11. Ugh, made a spelling error; committed
  12. Fixed moar typos
  13. Uggggggh, one last commit to fix spacing; pushed to Github
  14. Wha? Tests failing? :facepalm: forgot a file again
  15. Commit that file and push one last time for realz
  16. Tests pass, team reviews and approves, we’re good to go
  17. Pull an update of the main branch
  18. Come back to my-sweet-new-feature and rewrite all the commits on top of updated main; group edits into a clean subset of logical changes, one per commit, that makes it look like I wrote the code perfectly the first time, with nicely crafted commit messages that will mean something to some poor future developer that has to maintain my code ten years from now
  19. Run tests to ensure everything still passes; it does
  20. Force push to the pull request, obliterating all those ugly prior commits with these nice clean ones
  21. Try to merge the branch to main with fast-forward only option, forget that Github doesn’t support it (curse you Github! even flipping CodeCommit supports fast-forward only merging)
  22. Fast-forward merge my-sweet-new-feature to main locally so my final commit signatures are preserved
  23. Try to push main to Github, but it fails because the branch is protected
  24. Unprotect main temporarily, repush
  25. Dang it, someone merged new changes since step 17; repull main and rebase my branch
  26. Re-push my final changes and re-protect main (see the command summary below)
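
For the curious, the cleanup in steps 17 through 26 boils down to a handful of commands (branch names per the story above; lifting Github’s branch protection still happens in the web UI):

    git fetch origin                                         # step 17: grab latest main
    git rebase -i origin/main my-sweet-new-feature           # step 18: rewrite into clean logical commits
    git push --force-with-lease origin my-sweet-new-feature  # step 20: safer than a bare --force
    git checkout main
    git merge --ff-only my-sweet-new-feature                 # step 22: keep commit signatures intact
    git push origin main                                     # steps 23-26: protection lifted first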

See how easy that is? No excuses moving forward, my friends!