This is Part III, continued. If you landed right here, I recommend reading Part I and Part II first to grasp the entire concept.
Enjoyed Fermi tasks? They are all around us, with plenty of cases to train on. Now that we know how to observe the Unknown from many perspectives, convert observations into numbers, classify observations into uniform sets, apply Fermi calculations within the sets, and assemble the estimate for the entire scope, it is time to pay more attention to the accuracy of the numbers. Probabilities really help. But in this Part III we will look at other hidden tools that give us more wisdom. They are described below, unordered, section by section. Within every section I will show a practical application, so that we are all set by the end of this post.
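The pipeline above (classify into uniform sets, estimate each set, assemble the total) can be sketched in a few lines. All the numbers and set names below are hypothetical placeholders, not figures from this post:

```python
# Fermi-style assembly: estimate each uniform set separately,
# then sum the per-set estimates into the total scope.
# The subsystem names and man-day figures are made up for illustration.
subsystems = {
    "ui": 40,           # man-days, rough per-set estimate
    "backend": 120,
    "integration": 60,
}

total = sum(subsystems.values())
print(total)  # 220 man-days before any fine-tuning
```

The point is not the arithmetic, it is the discipline: each set gets its own estimate under its own assumptions before anything is summed.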
How many alternative estimations did you do?
The principle of least effort postulates that animals, people, and even well-designed machines will naturally choose the path of least resistance, or "effort". There is a whole theory covering diverse fields from evolutionary biology to webpage design. The direct relation to estimation is that in most cases there is only one alternative. Even if there are a few, they are surely variations of the primary one, because that was easier for the people who produced them. It was easier for you too. People are lazy, hence estimations suck. There were not enough independent alternatives to compare at the end, when you judge the final numbers (via various expert judgement methods). Hence, always force at least five truly independent paths of analysis and calculation to ensure your result matters in the end. Use managerial power to make people produce more independent alternatives if you are a manager. Force yourself to do so if you are working solo virtuoso. One more trick is to set a different starting point and let people follow their own path of least resistance :) Your goal is to get alternative estimations, so apply whatever tricks achieve it.
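Once you have several truly independent estimates, comparing them is mechanical. A minimal sketch, with hypothetical man-day figures from five independent analysis paths:

```python
from statistics import median

# Hypothetical man-day totals from five independent estimation paths.
estimates = [180, 210, 150, 240, 200]

central = median(estimates)            # central value across the paths
spread = max(estimates) - min(estimates)  # wide spread = low confidence
print(central)  # 200
print(spread)   # 90
```

A large spread relative to the median is the signal that the paths were not as independent, or not as careful, as you hoped.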
Paths of least resistance.
The magic of One.
The first-digit law states that in lists of numbers from many real-life sources of data, the leading digit is distributed in a specific, non-uniform way. According to this law, the first digit is 1 about 30% of the time, and larger digits occur as the leading digit with lower and lower frequency, to the point where 9 occurs as a first digit less than 5% of the time. It is known as Benford's law. It is observed pretty often, though not all real-life sources obey it. Partially this is related to dynamics: for some data it may have worked many years ago, or it will work in the future, when humanity adapts (or augments) the numeric system.
Examining a list of the heights of the 60 tallest structures in the world by category shows that 1 is by far the most common leading digit, irrespective of the unit of measurement. The same can be said about car weight (in the metric system). People's height is 1 m and something, less than 2 m (in the metric system). A ticket price for a transatlantic flight is often 1K and something. So it is ubiquitous.
Does it emerge in programming? For sure, yes! You see it when you calculate volumes of code, data, classes, methods, the number of people in an organization, etc. But we are interested in estimation. In estimation it is also observed: the first digit "1" prevails, whether in man-days, man-hours, or cost expressed in money (e.g. 10,000+ or 1,000,000+ in dollars). Hence, always check what your first digit is, and if it is not "1", double-check why not. Probably it is OK, but you will know for sure only after validation. Conversely, if you applied a sufficient number of orthogonal calculations and your numbers confirm Benford's law, consider it an additional argument that you did it right!
Distribution of first digits (in %, red bars) in the populations of the 237 countries of the world. Black dots indicate the distribution predicted by Benford's law. More details and diagrams on wiki.
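Benford's law gives the leading-digit probability as P(d) = log10(1 + 1/d), so the check on your own numbers is a one-liner. The list of estimates below is hypothetical, just to show the mechanics:

```python
import math
from collections import Counter

# Benford's law: P(d) = log10(1 + 1/d) for leading digit d in 1..9.
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
print(round(benford[1], 3))  # 0.301: digit 1 leads about 30% of the time

# Count leading digits of your own estimates (hypothetical numbers).
estimates = [12000, 1500, 180, 95000, 1100000, 13, 170, 2400]
leading = Counter(int(str(abs(n))[0]) for n in estimates)
print(leading[1])  # 6 of the 8 estimates start with "1"
```

If "1" is nowhere near the most frequent leading digit in a sizable list of your figures, that is a cheap prompt to go double-check them.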
Sounds familiar? A 2x overrun compared to the initial estimation? This pattern emerged in the computer world, or at least became popular in the last 25 years: "The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time." Tom Cargill, Bell Labs. The pattern is called the 90/90 or ninety-ninety rule. It expresses both the rough allocation of time to the easy and hard portions of a programming project and the cause of the lateness of many projects (that is, failure to anticipate the hard parts). In other words, it takes both more time and more coding than expected to make a project work. How does it help you? Just multiply your number by a factor of 1.8 (90% + 90% = 180% of the original schedule). This increases the probability of successful execution within the estimation (effort/cost/budget, schedule). It can be applied at the stage when you have obtained estimations for subsystems; especially when you get an estimate with probability p50, use 90/90 to push the probability to p80 and beyond. The rule can also be applied on top of everything. It cannot be distilled to exact reasons; it is just observable, so respect it and use it as a tool for fine-tuning your numbers.
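Applying the rule at the subsystem stage looks like this. The subsystem names and p50 man-day figures are hypothetical:

```python
# Ninety-ninety fine-tuning: 90% + 90% of the schedule gives a 1.8x factor.
# Hypothetical subsystem estimates at roughly p50 confidence, in man-days.
subsystem_p50 = {"parser": 20, "storage": 35, "api": 25}

adjusted = {name: days * 1.8 for name, days in subsystem_p50.items()}
total = sum(adjusted.values())
print(total)  # 144.0 man-days instead of the optimistic 80
```

Whether you apply the factor per subsystem or once on the grand total, the sum is the same; per subsystem is handier when different parts get different risk treatment.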
This is my favorite. As the aircraft designer Antonov said: "A beautiful aircraft flies perfectly." It is my translation, but the essence is unchanged. Beautiful things work better. If something works pretty well but looks ugly, it means there is an opportunity to improve its design. If some beautiful thing works well, then probably that is it: the thing is at its evolutionary end, and there is no way to improve its design. Back to estimation and computer systems. If the architecture (components, modules, network, database, deployment, etc.) visually sucks, it means one of two things: the representation sucks, or the subject itself sucks and must be improved. An elegant solution wakes up positive emotions. Your clients and partners will prefer elegant and beautiful solutions, even if they claim they need something quick and dirty. That is bullshit. We are all people, and when we go to the store we do not buy quick and dirty stuff, neither pants nor shoes; we select something better, according to our taste. When we go to lunch, we do not eat a dirty meal... we may be used to eating quickly, but definitely not dirty. You've got the idea. The Unknown that you got for estimation should be perfectly beautiful by now, because you dissected it with your analysis, saw the true picture, understood the goals, desiderata, limitations, alternatives, etc., built an elegant solution and estimated it properly. If not, then your analysis did not succeed and you overlooked something important. Go back and look at the Unknown again, until you see the beauty. Only then can you estimate.
It is possible to estimate quickly. Just learn from Mother Nature: respect emergent patterns and recognize them in your computerized business, work, and being. We cannot explain those patterns, neither prove them nor refute them. We can only observe them and agree that they were, they are, and they will be around and within us. As you have probably understood, they apply not only to estimation but to everything. The binding to estimation was done to ease your estimation exercises.