It’s Never Five Minutes
There’s no magic method for software estimation that produces perfect results. Nor is it a skill that’s easily taught. For the most part, you just have to start doing it, make a lot of mistakes, and gradually get better.
That being said, here are a few articles on the topic worth reading:
- Rules of Thumb for Software Development Estimations (love all this guy’s stuff)
- Estimating Software Projects (a four-part series)
- Fruit salad: a scrum estimation scale (sounds weird, but hear it out)
My own two (or more) cents is that individual task estimates will always carry considerable uncertainty, but that can be mitigated by quantity. As long as there’s no systematic bias, individual over- and under-estimates tend to cancel out, so the sum of estimates across a set of tasks becomes more accurate (relative to its size) as the set grows. And any bias that does exist (a tendency to underestimate is most common) can be compensated for with a final multiplicative factor (1.25 is a useful starting point, and can be refined over time).
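To make that arithmetic concrete, here’s a minimal Python sketch. The task names, hours, and the 1.25 factor are made-up placeholders, not prescriptions; the point is only that the correction is a single multiplication applied to the sum.

```python
# Toy numbers: per-task estimates in hours, summed and then scaled by a
# bias-correction factor. Refine the factor against your own history of
# estimates vs. actuals.

task_estimates_hours = {
    "build login form": 6,
    "wire up payments API": 4,
    "integration tests + bug fixing": 5,
}

BIAS_FACTOR = 1.25  # compensates for the common tendency to underestimate

raw_total = sum(task_estimates_hours.values())
adjusted_total = raw_total * BIAS_FACTOR

print(f"raw total: {raw_total}h, bias-adjusted total: {adjusted_total}h")
# raw total: 15h, bias-adjusted total: 18.75h
```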
Further, gathering multiple estimates per task also increases accuracy, roughly in proportion to the diversity of perspectives brought to it. Have front-end folks contribute to back-end estimates and vice versa; get estimates from both senior and junior engineers; include people who work across multiple projects. If nothing else, it will promote collaborative conversations.
Here’s a process I use to achieve that kind of diversity. It’s most applicable when estimating a project, but it could also be used for large features; it would be overkill for a single task. It can be done either synchronously or (mostly) asynchronously, and doesn’t take more than a couple of hours.
- Put together a team of three estimators, ideally with a variety of skills and experience relevant to the project being scoped.
- Each person reads through whatever project materials have been assembled so far (descriptions of the work, details on the customer and industry, overall objectives, etc.).
- Each person independently writes down, in a few sentences, their understanding of the project’s objective. This description should be non-technical and describe business outcomes, not implementation details.
- The group comes together and synthesizes their separate descriptions into a unified paragraph they all agree accurately captures the project objectives.
- Each person independently puts together a list of tasks to be completed based on that description. The list should cover everything needed to get to done: not only implementation, but also design time, testing, bug fixing, deployments, documentation, and so on.
- The group reassembles and synthesizes the individual lists into a unified task list, removing duplicates as needed and discussing any items that aren’t broadly understood, until everyone feels good that the list is as complete and detailed as possible.
- The scopers separate once again, and each person independently comes up with an estimate for every task on the unified list. Scopers shouldn’t get hung up too long on any one task: give it a few minutes’ thought, make a best effort (err on the high side), and capture any assumptions or unknowns.
- Once everyone has estimated independently, come together again and compare estimates task by task (along with any assumptions captured for that task). If there’s little variance across the estimates for a task, use the average as the final value. If there’s disagreement, discuss until a common understanding is reached; reword the task, or split it into multiple tasks if needed, and estimate those subtasks. If a consensus estimate can’t be reached in a reasonable amount of time, use the largest original estimate for the task.
- Sum the final estimates for each task to come up with a total estimate for the project. Each person gets a chance to share their feelings on the total: does it seem right in aggregate, compared to the description of work agreed on earlier? If so, excellent. If not, re-review the task list and iterate on the estimates again, either as a group or by separating once more.
- Collate the list of assumptions and unknowns and discuss. Based on what’s identified, decide on a final uncertainty multiplier. As general guidance: add 10% if the list is short and/or simple, 25% for “standard” assumptions, and up to 50% if there are large unknowns or significant anticipated complexity. There’s no hard rule here; apply professional judgment.
- Apply this multiplier to the sum of the task estimates to get a final estimate for the project (the sketch after this list shows how the numbers roll up). Share it with stakeholders along with the project description and list of assumptions. Be prepared to defend the value, but also be open to questions and challenges.
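As a concrete (if simplistic) illustration, here’s a minimal Python sketch of that roll-up. The sample estimates, the task names, the 25% “agreement” threshold, and the 1.25 multiplier are all assumptions made up for this example; the real reconciliation happens in the room, and the code only mirrors the fallback rules described above.

```python
from statistics import mean

# Independent estimates (in hours) from the three scopers, per unified task.
# All numbers here are made up for illustration.
estimates = {
    "design review flow": [4, 5, 4.5],
    "implement API endpoints": [8, 16, 10],
    "deployment + documentation": [2, 2, 2],
}

AGREEMENT_SPREAD = 0.25        # treat estimates within ~25% of their mean as agreement
UNCERTAINTY_MULTIPLIER = 1.25  # 1.10 to 1.50 depending on assumptions and unknowns


def reconcile(values: list[float]) -> float:
    """Average when the scopers roughly agree; otherwise fall back to the
    largest original estimate (the rule for when discussion doesn't converge)."""
    spread = (max(values) - min(values)) / mean(values)
    return mean(values) if spread <= AGREEMENT_SPREAD else max(values)


task_totals = {task: reconcile(vals) for task, vals in estimates.items()}
raw_total = sum(task_totals.values())
final_estimate = raw_total * UNCERTAINTY_MULTIPLIER

for task, hours in task_totals.items():
    print(f"{task}: {hours}h")
print(f"sum of task estimates: {raw_total}h")
print(f"final estimate with uncertainty multiplier: {final_estimate}h")
```

You’d obviously never ship something like this; the point is that once the conversations have happened, the final number is simple arithmetic, which makes it easy to show stakeholders exactly how you got there.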
I haven’t met anyone who truly loves software estimation, but it’s critically important if you want to be successful beyond the hobbyist level. Put in the effort as a team and reap the benefits.