Tag: Are Right A Lot

School’s In Session

Tonight I kick off a class from Stanford called Ethics, Technology, and Public Policy for Practitioners. It’s been a hot minute since I’ve been involved with formal education (about 10 years actually), but I’m pretty excited. Not just for the learning, but for the people I’ll meet along the way, who appear to be a fantastically variegated bunch based on what I’ve seen on Slack so far.

Here’s the course description from the syllabus:

Our goal is to explore the ethical and social impacts of technological innovation. We will integrate perspectives from computer science, philosophy, and social science to provide learning experiences that robustly and holistically examine the impact of technology on humans and societies.

Basically it’s Jud catnip. If it sounds interesting to you, I think it’s offered periodically. Here’s a link for future reference.

Withertos And Whyfors

If I’ve said it once, I’ve said it a thousand times: there’s more to being a software engineer than coding. In fact, coding isn’t even the hardest part.

The point of that latter article is that AI won’t replace programmers any time soon, but not because it can’t code. Rather because it needs to know what it’s coding for, and specifying that well is what matters, whether it be a carefully constructed prompt to GPT or a detailed requirements document.

One of my favorite sayings is “It’s only software!” And I mean it, in that with enough time and money, computers can do just about anything (which is itself pretty darn cool). But no amount of software can determine what ought to be built. To do that we must apply a broader set of tools.

Truth At The Intersection

Earlier this year I pledged that 32 of the 44 books I read would be by authors who are either non-white or non-male, 73% of my total. Juneteenth seems like an excellent day to see how I’m doing, given both its significance to my objective and its position near the middle of the year.

As of today I’ve completed 24 books, ahead of my required pace of 22 by this date. Of those, 4 were written by white women, 2 by non-white women, and 10 by non-white men. That’s 16 in total, or 67%, which means I need to pick up my pace a bit to hit my goal. Of my current 3 books in flight, 2 are by women and 1 was written by a consortium of indigenous folks, so that’ll help things out. And I’ve plenty more qualifying books in my queue.

If you’re curious, you can see what I’m reading any time on my Goodreads page.

Not Forgotten

Generally speaking, people want to know they’ve made a difference in the world that will outlast themselves. Few occupations carry the intense immediacy of potentially giving one’s life for the future of human flourishing like that of the soldier. It’s an admirable profession, worthy of respect, especially for those for whom this potential became reality. However, not many are suited for such work, and ideally the need for soldiers will shrink as societies mature.

Thankfully there are myriad other ways in which a person can approach a career with the future in mind. Last week I read through a number of articles from 80,000 Hours, and I have the book of the same name queued up as well. The premise is that our jobs take up a considerable fraction of our lives, likely more than any other activity, and thus it behooves us to think deeply about how to spend that time. That’s hardly controversial, but applying logical principles and data-backed guidance to maximize future impact can be (as with discussions around Effective Altruism).

Personally, I find the arguments compelling. It’s why I’ll spend the rest of my time as a technologist dedicated to work in the public sector, and especially in the space of tech and politics. When it comes to maximizing human flourishing, good public policy is critical, and with a shrinking world, the risk of bad policy is existential.

Many have died to give those who remain the chance to build this better world. Honored to be among the latter group; don’t intend to let it go to waste.

Wants What It Wants

(I seem to open a lot of blog posts with variations on “I’ve written before about X.” Here’s another one.)

Back in 2017 I wrote three posts in a row about the tension between giving users what they want and guiding them to what is best. They came to mind this week when I ran up against one of terraform’s more annoying (lack of) features: the inability to apply changes on a per-file basis. I absolutely understand why it’s not supported, but sometimes it’s a quick and dirty way to keep moving forward, and dang it, I needed it!

After a day of annoyingly overspecifying --target flags in my commands, I rolled up my sleeves and built a wrapper script that would do the work for me, essentially adding a --file flag to the CLI. I share it here as a service to the Infrastructure-as-Code community. Usage is easy:

terraform-files apply --file my-infra.tf

It’s hacky as heck, so use at your own risk, your mileage may vary, etc etc. And if you want to make it better, revisions welcomed!
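
If you’d rather roll your own, here’s a rough sketch in Python of the core idea (not the actual script): collect the resource addresses declared in the given file and hand them to terraform as --target flags. It assumes resources appear as plain resource "TYPE" "NAME" blocks and simply passes every other argument through untouched.

    #!/usr/bin/env python3
    """Rough sketch only: translate a --file flag into terraform --target flags."""
    import re
    import subprocess
    import sys

    def targets_from_file(path):
        # Collect the resource addresses (type.name) declared in the given .tf file.
        pattern = re.compile(r'^\s*resource\s+"([^"]+)"\s+"([^"]+)"', re.MULTILINE)
        with open(path) as fh:
            return [f"{rtype}.{rname}" for rtype, rname in pattern.findall(fh.read())]

    def main():
        args, targets, passthrough = sys.argv[1:], [], []
        i = 0
        while i < len(args):
            if args[i] == "--file":
                targets.extend(targets_from_file(args[i + 1]))  # e.g. my-infra.tf
                i += 2
            else:
                passthrough.append(args[i])
                i += 1
        # Forward everything else to terraform, plus one --target per resource found.
        cmd = ["terraform"] + passthrough + [f"--target={t}" for t in targets]
        sys.exit(subprocess.call(cmd))

    if __name__ == "__main__":
        main()

A real version also has to worry about module and data blocks, quoting edge cases, and files with no resources at all, which is exactly the sort of sharp edge the “use at your own risk” caveat is for.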

It’s Never Five Minutes

There’s no magic method to software estimation that produces perfect results. Nor is it a skill that’s easily taught. For the most part, you just have to start doing it, make a lot of mistakes, and gradually you’ll get better over time.

That being said, here are a couple articles on the topic worth reading:

My own two (or more) cents is that individual task estimates will always have considerable uncertainty, but that can be mitigated by quantity. As long as there’s no systemic bias, the sum of estimates across a set of tasks will become more accurate as the set of tasks itself grows. And any bias that might exist (a tendency to underestimate is most common) can be compensated for by a final multiplicative factor (1.25 is a useful starting point, but can be refined over time).
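
To make that concrete, here’s a quick simulation with made-up numbers: each task’s estimate is noisy and runs low on average, yet the relative error of the bias-corrected total shrinks as the task count grows.

    import random

    def average_relative_error(num_tasks, trials=10_000, bias=0.8, noise=0.5):
        # Each task truly costs 1.0; estimates run low on average (x0.8) and are
        # noisy (+/-50%). A 1.25 multiplier compensates for the known bias.
        errors = []
        for _ in range(trials):
            estimates = [bias * random.uniform(1 - noise, 1 + noise) for _ in range(num_tasks)]
            corrected_total = 1.25 * sum(estimates)
            errors.append(abs(corrected_total - num_tasks) / num_tasks)
        return sum(errors) / trials

    for n in (1, 5, 25, 100):
        print(f"{n:>3} tasks: average relative error {average_relative_error(n):.1%}")

The individual estimates stay just as uncertain; it’s the aggregation, plus the bias correction, that does the work.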

Further, a diversity of estimates per task also increases accuracy, in direct proportion to the diversity of perspectives brought to bear. Have front-end folks contribute to back-end estimates and vice versa; get estimates from both senior and junior engineers; include people who cut across multiple projects. If nothing else it will promote collaborative conversations.

Here’s a process I use to achieve the above diversifications. It’s most applicable when estimating a project, but could also be used for estimating large features, though it’d be overkill for a single task. It can be done either synchronously or (mostly) asynchronously, and doesn’t take more than a couple hours.

  1. Put together a team of three estimators, ideally with a variety of skills and experience relative to the project being scoped.
  2. Each person reads through whatever project materials have been assembled so far (descriptions of the work, details on the customer and industry, overall objectives, etc).
  3. On their own, each person independently writes down in a few sentences their understanding of the objective of the project. This description should be non-technical and describe business outcomes, not implementation details.
  4. The group comes together and synthesizes their separate descriptions into a unified paragraph they all agree accurately captures the project objectives.
  5. Each person independently puts together a list of tasks to be completed based on that description. These should include everything needed to get to done, including not only implementation but design time, testing, bug fixing, deployments, documentation, and so on.
  6. The group reassembles and synthesizes each list into a unified task list, removing duplicates as needed, and discussing any items that aren’t broadly understood, until everyone feels good that the list is as complete and detailed as possible.
  7. The scopers separate once again, and on their own each person comes up with an estimate for each task on the unified list. Scopers should not get hung up too long on any one task; give it a few minutes’ thought, make a best effort (err on the high side), and capture any assumptions or unknowns.
  8. Once everyone has independently estimated, come together again, and compare estimates task by task (as well as any assumptions anyone captured for that task). If there’s little variance across the estimates on a task, use the average as a final value. If there is disagreement on the estimate, discuss until a common understanding is reached. Reword the task, or split it into multiple tasks if needed, and then estimate those subtasks. If a consensus final estimate cannot be reached after a reasonable amount of time, the largest original estimate for the task should be used.
  9. Sum the final estimates for each task to come up with a total estimate for the project. Each person gets a chance to share their feelings on this total: does it seem right in aggregate compared to the original description of work determined earlier? If so, excellent. If not, re-review the task list and iterate on estimates again, either as a group, or even going back separately.
  10. Collate the list of assumptions and unknowns and discuss. Based on what is identified, decide on a final uncertainty multiplier. As general guidance, add 10% if the list is short and/or simple, 25% for “standard” assumptions, and up to 50% if there are large unknowns or significant anticipated complexities. There’s no hard rule here; apply professional judgment.
  11. Apply this multiplier to the task estimate sum to get a final estimate for the project. Share it with stakeholders along with the project description and list of assumptions. Be prepared to defend the value, but also be open to questions and challenges. (A small worked example of these final steps follows this list.)
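
Here’s a tiny worked example of steps 8 through 11, with made-up task names and hours: average where the estimators roughly agree, fall back to the largest estimate where they don’t (after the discussion step 8 calls for), then sum and apply the uncertainty multiplier. The 25% spread threshold for “little variance” is just an illustrative choice.

    from statistics import mean, pstdev

    # Hypothetical hours from three independent estimators.
    task_estimates = {
        "design review": [4, 5, 4],
        "api endpoint":  [8, 8, 10],
        "load testing":  [6, 16, 8],   # big disagreement: discuss; if unresolved, take the max
    }

    AGREEMENT_THRESHOLD = 0.25     # relative spread below which we simply average
    UNCERTAINTY_MULTIPLIER = 1.25  # "standard" assumptions per step 10

    total = 0.0
    for task, estimates in task_estimates.items():
        spread = pstdev(estimates) / mean(estimates)
        final = mean(estimates) if spread <= AGREEMENT_THRESHOLD else max(estimates)
        total += final
        print(f"{task}: {final:.1f}h (spread {spread:.0%})")

    print(f"Subtotal: {total:.1f}h -> final estimate: {total * UNCERTAINTY_MULTIPLIER:.1f}h")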

I haven’t met anyone who truly loves software estimation, but it’s critically important if you want to be successful beyond hobbyist level. Put in the effort as a team and reap the benefits.

Remix

I’m three weeks into the new job now, and while in many ways it’s exactly what I expected, there have been a few surprise challenges. However, what we’re facing isn’t new to me. Though the details are always different (history doesn’t repeat itself, despite the adage), and I’ll never feel totally up to the task, my nearly 25 years of professional technical work were excellent preparation.

It isn’t just the years of experience, though. I’ve intentionally pursued a variety of situations, and through a combination of hard work, the graciousness of colleagues and bosses, and some luck, I’ve been in the positions I’ve needed to develop professionally and grow my career. I’m thankful for that.

Walking the line between unchallenging safety and ineffective overreach is not easy, but I advise erring on the side of the latter. While it’s true there’s no compression algorithm for experience, one has some control over the speed and variety at which experiences are… experienced. And that’s an encouraging thought.

I have no doubt I’m right where I need to be. Looking forward to Monday and the chance to move the needle.

Don’t Repeat Yourself

Technologists are generally pretty bad at understanding their own history. Understandable, given how quickly the industry moves, but still regrettable.

If you’ve ever written a line of JavaScript or called an API that returned JSON (and who hasn’t, they’re about as ubiquitous as tech can get), you should get to know Douglas Crockford. This interview with him covers a wide range of topics, including how JSON came to be (spoiler: as the antidote to XML), his perspective on JavaScript as a whole (and how it changed from his first impression), what it was like to work through the dot com bubble, and much more.

I also suppose I should study my own history better, because I’ve written on the topic of studying history several times before. Though not all forms of repetition are bad, right? Right?

Security Sunday

I’ve been a daily user of YubiKeys since 2018. These little devices pack a hefty security punch with a number of useful features, including universal second factor (U2F), time-based one-time passwords (TOTP), static passwords, and personal identity verification (PIV).

This article contains an excellent overview of all the functions and how to use them. If you’re at all interested in beefing up your security posture, I can’t recommend it highly enough.

Sean Maguire Was Right

Don Norman came to mind today, a not uncommon occurrence. I’ve mentioned him before, but not loudly enough. If you care at all about building great solutions, technical or otherwise, reading The Design of Everyday Things is a must. Here’s my favorite passage:

The idea that a person is at fault when something goes wrong is deeply entrenched in society. That’s why we blame others and even ourselves. Unfortunately, the idea that a person is at fault is imbedded in the legal system. When major accidents occur, official courts of inquiry are set up to assess the blame. More and more often the blame is attributed to “human error.” The person involved can be fined, punished, or fired. Maybe training procedures are revised. The law rests comfortably. But in my experience, human error usually is a result of poor design: it should be called system error. Humans err continually; it is an intrinsic part of our nature. System design should take this into account. Pinning the blame on the person may be a comfortable way to proceed, but why was the system ever designed so that a single act by a single person could cause calamity? Worse, blaming the person without fixing the root, underlying cause does not fix the problem: the same error is likely to be repeated by someone else.

The notion of system error is quite profound, applicable to technology, organizations, governments, even entire civilizations. Leaders of all stripes do well to consider its explanatory power.

Just discovered that Don has a new book coming next month: Design For a Better World. Pre-ordered!