With the #NoEstimates topic still ringing in my ears, here's an interesting presentation from one of the supporters of this concept. It has many good concepts, one serious math error, and connects well with how we manage and work billion dollar programs.
Ignoring for a moment some of the bad math in the beginning - the claim that we can't forecast the future, which is not true and is done all the time with high confidence - there are good observations in his domain that connect to how we forecast future cost and schedule in ours.
- The difference between cardinal and ordinal values (numbers) when estimating. This is a core issue in agile. Story Points are ordinal. They are not calibrated prior to use. Items can be calibrated and are therefore cardinal. Reference Classes can be cardinal - and must be to be useful.
- The notion of small increments is critical to success. Little Bets is the general notion; we've tried to apply this more. Rolling Waves and the 44-day rule are examples in our domain. Not used enough, by the way. His notion is very similar - and it'd be interesting to have a long conversation about this - to Reference Class Forecasting.
- The notion that the backlog grows at the same time as it is being reduced brings up a critical notion that is entering our domain. Define the needed capabilities before defining any technical details. The concept of Capability Based Planning is entering a wider realm.
- The statement that what we release to customers is features, not story points. This is the basis of our Integrated Master Plan / Integrated Master Schedule deliverables-based planning process. This deliverables focus is baked into the DOD acquisition process. The presentation makes this clear. Customers bought a capability. They didn't actually buy the code and certainly didn't buy story points.
- The notion of rolling wave is stated when he speaks about not worrying about future work that is not planned in detail. Rolling Wave is also baked into our domain with the IMP/IMS and DOD acquisition process.
- Story points need to be decomposed, just like Work Packages and Tasks under the 44-day rule, which says no work can cross more than one accounting period. It answers the question: how long are you willing to wait before you find out you're late?
- Here's a serious error: Gaussian distributions ONLY work when the underlying processes are independent and identically distributed (iid). This almost NEVER happens in projects. Everything is coupled; almost nothing is independent.
- The notion of using Story Points versus Items Delivered is interesting. What's needed is the size of the items delivered in units independent of the story points. It may be that items delivered is a better indicator of final cost, but we need to know why. It may be a function of the statistical distribution of the work that naturally occurs when defining deliverables.
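To see why the iid point matters, here's a small Monte Carlo sketch in Python (all numbers invented for illustration): ten tasks whose durations share a common project-wide factor produce a total-duration spread roughly three times wider than ten truly independent tasks with the same individual variability.

```python
import random
import statistics

random.seed(42)
N_TRIALS = 20_000
N_TASKS = 10

def total_duration(coupling):
    # Each task's duration = 10 days + a shared project-wide factor + its own noise.
    # coupling > 0 correlates the tasks, violating the iid assumption.
    shared = random.gauss(0, coupling)
    return sum(10 + shared + random.gauss(0, 1) for _ in range(N_TASKS))

independent = [total_duration(0.0) for _ in range(N_TRIALS)]
coupled = [total_duration(1.0) for _ in range(N_TRIALS)]

# iid tasks: std dev of the total is sqrt(10) ~ 3.2 days.
# Coupled tasks: the shared factor contributes 10*shared, so sqrt(10**2 + 10) ~ 10.5 days.
print(f"independent: {statistics.stdev(independent):.1f}  coupled: {statistics.stdev(coupled):.1f}")
```

The point of the sketch: the Gaussian-with-iid intuition badly understates the spread as soon as the work items are coupled, which on projects they almost always are.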
In the YouTube presentation from the previous post, titled Predicting the Future, there was a statement made at around minute 9:08 that the only way you can estimate is if you can look into the future, and that requires precognition, which we don't have. For those of us earning our living by looking into the future, this brings a smile.
Start with machines that look into the future every day. The air traffic control system looks into the future and throttles the traffic to avoid congestion - not so well many times, because of externalities - but the algorithms of NextGen do this. Positive Train Control does something similar. I'm familiar with these two programs, having worked on parts of them in the past. Guiding missiles to their targets requires looking into the future to rapidly determine where the target is going to be a few seconds from now, and being there as it arrives. I'm very familiar with this type of estimating, having written lots of code to guide that missile.
In the project management domain, forecasting future performance of the team's productivity and quality and taking corrective actions to avoid undesirable outcomes is what we do for a living. This field is called Program Planning and Controls (PP&C). We are PP&C for software intensive programs as well as things that fly, drive, or swim away using that software.
So let's start with the basis of how to forecast the future given the past, a model of the past, or some representation of the past. We make several assumptions. Without accepting these, it's a short discussion.
- The underlying dynamics of the system under control are describable mathematically. These dynamics can be stationary or non-stationary. But the underlying system has to behave in such a way that an equilibrium can be achieved at some point. Disruptive systems are difficult to control, and making estimates in the presence of disruption is a waste of time.
- The basis of our forecast needs to contain as small a number of parameters as possible while still representing the source of the forecast. This is called parsimony. But care is needed here. This is NOT Occam's Razor - that's a 700-year-old notion, from long before Rev. Bayes came up with Bayesian statistics. So we need a model with the right number of parameters.
- Here's a notional example of what R can do when forecasting the future.
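The book's worked examples are in R; here's the same flavor of analysis sketched in Python instead - fit an AR(1) model to a synthetic monthly performance series by least squares, then forecast the next six periods. The series and the coefficients are invented for illustration.

```python
import random

random.seed(1)

# Synthetic monthly "cost performance" series: AR(1) around a mean of 50, phi = 0.8
phi_true, mean_true = 0.8, 50.0
series, x = [], 0.0
for _ in range(60):
    x = phi_true * x + random.gauss(0, 2)
    series.append(mean_true + x)

# Fit AR(1) by least squares on the mean-centered series
m = sum(series) / len(series)
c = [v - m for v in series]
phi_hat = sum(c[t] * c[t - 1] for t in range(1, len(c))) / sum(v * v for v in c[:-1])

# Forecast the next 6 periods: the h-step forecast decays geometrically toward the mean
last = c[-1]
forecast = [m + last * phi_hat ** h for h in range(1, 7)]
print(f"estimated phi = {phi_hat:.2f}")
print([f"{f:.1f}" for f in forecast])
```

A real analysis would use a proper time-series package (R's `arima`, for instance) and report the error bands along with the point forecasts; this sketch only shows the mechanics.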
Where can we learn about all this stuff? For forecasting project performance - cost, schedule, technical performance, labor utilization, quality of product, defect absorption rates, etc. - one place is Time Series Analysis and Its Applications: With R Examples, 3rd Edition. This is my favorite book, because it has working examples of the analysis we need right there in the book, with the R code and the supporting math. And since we're essentially lazy, we can get to the answer without too much effort.
For those who don't like math in the same way they don't like estimates, well, this is going to be a problem. Because to make credible forecasts of future behavior from past behavior - time series forecasting - math (statistics and probability, actually) is required. No way out of it, sorry.
Now with this base established, what are some rules for actually making forecasts?
- We need indicators of performance. Leading indicators are best, but lagging will do if they don't lag too far back. Knowing that the horse has run out into the pasture by seeing her from the kitchen window is a lagging indicator. Knowing the horse has run down the road by getting a call from the sheriff is a little too lagging.
- We need to know what we actually want to measure. The really important point made in the video is that customers don't buy story points, they buy stories. Our core marketing brochure says it a little differently. In fact, customers don't buy stories, they buy the capabilities the stories enable. This notion of capabilities is not found very often in the IT world, but it is the essence of how we do business in the defense and related industries. JCIDS is a place where capabilities are identified before any project work is ever done.
- The last thing is we must establish what kind of estimate we want for the future. What is good enough? If you're estimating the cost of an 8-week project that involves a few people, you're wasting your !@#$ing time. Don't do it. Listen to the #NE crowd and don't do it. But if you're estimating the cost, schedule, performance, and weight of a flight avionics package for a Mars lander, then that's another thing.
Let's look at this example.
- We have an idea that we'd like to try out. Let's fly through the tail of a comet. Actually, having the comet fly by us is more like it.
- How much does that cost? How long will it take to get a machine that will do that? Do we even know how to do this? It's never been done before by the way. How many people will we need on our team? What new physics will we have to invent to make this thing work?
- All good questions about the future. The unknown. It is a knowable unknown, but for now it is unknown.
- So here's the little rub. We need to ask for a sum of money that is not too much and not too little. How much money? We also have to have a credible plan to build this machine on a deadline. The comet is coming on a deadline, it is leaving on a deadline, and it is not coming back for a really long time. We also need estimates of our ability to produce on fairly fine-grained boundaries. This is much more than a business estimate. It's a project success estimate. If we can't make the machine on time, we fail. We can always go beg for more money. So how many features are needed, and how many can we build in the period of performance of the contract? This is called software development - and oh yeah, hardware too.
- Here's the actual mission.
So now what? How can we develop a credible - see that term credible, it's important - forecast of cost, schedule, and needed technical performance? How do we do this? We build a model. Ignore completely those who will quote George Box that all models are wrong, some are useful. In fact, ignore anyone who speaks about the use or misuse of models who does not actually have Time Series Analysis: Forecasting and Control, George E. P. Box and G. M. Jenkins, on their shelf and has at least once opened it to look at the Box-Jenkins autoregression algorithm.
This is the answer. There are tools, tons of tools, some are even free (R for example), that can be used to make forecasts of the future.
Now back to the term credible. This means that there is a well-defined confidence level for the forecast. The error band on that confidence level is also well defined. But the critical piece here is that there is causality between the past and the future, not just a correlation. In the YouTube presentation, near the end, when he is showing the differences between Story Points and Items as an example of better forecasting, he states - and may not have even realized it - that explaining the reason why needs more work. Correlation does not mean Causality. Correlation models are very risky, because they are probably wrong.
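As a sketch of what credible means in practice - a point forecast plus a stated confidence level and error band - here's a toy calculation in Python with invented throughput numbers, using a normal approximation:

```python
import statistics

# Last 12 periods' delivered items: the past performance we forecast from (invented data)
throughput = [7, 9, 6, 8, 10, 7, 8, 9, 6, 8, 7, 9]

mean = statistics.mean(throughput)
sd = statistics.stdev(throughput)

# Normal-approximation 80% band for next period's throughput (z for an 80% two-sided interval)
z80 = 1.282
low, high = mean - z80 * sd, mean + z80 * sd
print(f"forecast: {mean:.1f} items/period, 80% band [{low:.1f}, {high:.1f}]")
```

The band and its confidence level are what make the number an estimate rather than a guess - and, per the causality point above, only an understanding of why the past looked that way tells you whether the band will hold.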
So all this comes back to a core concept.
- We can forecast the future and do it every day.
- We may not need to forecast the future - make an estimate - if doing so costs more than the value at risk. The value at risk is a parameter of estimating. How much to spend on estimating is driven by how much we have to lose.
- We can only forecast short-term items in most instances. But at the same time, we can forecast very long-term items as well. I live in a town where several of my neighbors are climate modelers. They can forecast the weather a few days out. We live in the mountains, so mountain weather is sporty at best. They can also forecast (estimate) years, decades, centuries out. They make estimates of the future given a model of the past. To say we need precognition is silly.
- But we do have to have some assessment of past performance. If you ask a person who has never in their life built a Mars Lander flight avionics package how much it will cost, you'd probably not have much confidence in their number.
The very notion of No Estimates fails miserably in the absence of a domain, a context, a value at risk, and other parameters. So without starting out with a discussion of the concept's applicability, it rapidly goes in the ditch. But despite what some in the #NE camp believe, it is not up to those on the outside looking in to accept the concept in the absence of a domain and context. If #NE'ers want to move beyond their own tribe - and maybe they don't - some form of domain, context, and process is needed. The YouTube example is a very nice start. Which, by the way, doesn't say no estimates - it says no story points.
There is an immutable set of questions that needs answers before we can determine the probability of success for our project.
- How much is this project going to cost when we are done?
- How long is it going to take to deliver all the promised outcomes from the project?
- How many people do we need to get the deliverable out the door for the estimated cost and estimated completion date?
- What is going to cause your project to get in trouble?
- How can you assure you are actually making progress in some way meaningful to the people funding the project?
These are fundamental questions for any project. By any project, I mean any time-bounded, cost-bounded effort to produce something new. It needs to have some credible answers to these questions if you are spending someone else's money.
Spending your own money? No one cares. But spending other peoples money means you have an obligation to behave accordingly. So it all comes down to this.
When there is a suggestion of how to answer these questions, or even how to ignore the question, ask yourself this:
- Does this pass the smell test? That is, does this even sound logical? No Estimates is one of those.
- What is the domain and context of the suggestion? When someone says the domain is software, what software? A 5-week internal project, or building the flight avionics suite for Raptor?
- Has the suggested approach been shown to improve the probability of success for the project? Was this improvement correlated with the success, or was it the cause of the success? Many times there is correlation but no causality.
Pat Richard wrote the blog I wanted to. I've had a conversation of sorts with one of the "leaders" of the #NoEstimates movement, who talked in circles when asked: where is this method applicable outside your examples of a 5-week project and a cleanup of bugs on another project? The answer back was always: if you're serious, ask me better questions. This pseudo-Socratic method is tiring at best.
Pat's blog has some interesting ideas:
The main argument behind #NoEstimates is that estimating is hard, imprecise - some even say wasted time.
I'd suggest this is because estimating is not well developed in the domains these people work in. We provide estimates every month for software-intensive programs. Software that has emergent requirements, and sometimes even no requirements that can be validated. The INTEL business does this. The enemy writes the requirements, not the buyers of the system.
The precision argument is a red herring. The wrong, and mostly wrong-headed, assumption is that an estimate has to be precise. This is simply naive.
I personally believe that any estimation method, Agile or not, that assumes that past behavior is an indication of future behavior is very faulty.
From a deterministic point of view this is likely correct. From a probabilistic point of view, the past is the forecast of the future. The challenge is to determine the model of the past that can forecast the future with the degree of precision needed to make a decision. I can forecast the future toss of a die with ease. There is a 1-in-6 chance that a 4 will appear when I toss the die a single time. Toss two dice a bunch of times and I can build the probability distribution of the sums. This is high school statistics.
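That high school statistics can be written down exactly - the full distribution of the sum of two dice in a few lines of Python:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of tossing two dice
dist = Counter(a + b for a, b in product(range(1, 7), repeat=2))
probs = {total: Fraction(count, 36) for total, count in dist.items()}

print(probs[7])   # 1/6 - the most likely sum
print(probs[2])   # 1/36 - snake eyes
```

No precognition required: a model of the process (two fair dice) gives the probability of every future outcome.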
Now, estimating how long it will take to develop a piece of software requires a modeling process tailored to software development. There are many of these. COCOMO II, PRICE, and SEER come to mind. I just got out of a seminar where SEI was speaking to the issues of estimating software-intensive defense programs. QUELCE is their approach. We do this all the time in the embedded systems world too. There is a reference design and a set of reference class forecasts from past projects. Tune the model to generate an estimate. Continue to tune the model to reveal hot spots that need more investigation.
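As an illustration of the parametric style these tools share, here's the Basic COCOMO (Boehm, 1981) organic-mode formula in Python. COCOMO II, PRICE, and SEER are far richer; and in practice the coefficients below are tuned against a reference class of your own completed projects rather than used off the shelf.

```python
# Basic COCOMO (Boehm, 1981), organic mode - a simple parametric model of
# the kind COCOMO II, PRICE, and SEER elaborate. Published coefficients,
# shown for illustration only; real estimates calibrate them to past projects.
def basic_cocomo_organic(kloc):
    effort_pm = 2.4 * kloc ** 1.05           # effort in person-months
    schedule_months = 2.5 * effort_pm ** 0.38  # development time in months
    staff = effort_pm / schedule_months        # average staffing level
    return effort_pm, schedule_months, staff

# A hypothetical 32 KLOC embedded subsystem
effort, months, staff = basic_cocomo_organic(32)
print(f"{effort:.0f} person-months over {months:.1f} months, ~{staff:.1f} people")
```

The value is not the raw number; it's that a tuned model answers "what if the scope grows 20%?" in seconds, and reveals which inputs the estimate is most sensitive to.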
There are issues with this approach of course, not the least of which is that buyers don't like the number and ask for a better number. This is the source of many Nunn-McCurdy breaches for defense programs.
The bigger question, asked by one of the more obstinate posters on #NoEstimates, is: what is the value of estimating? My gut reaction is that this is a question coming from a programmer. It's not his money, he doesn't like doing estimates - probably because of some trauma from a past project where they held him to the estimate - and he firmly believes they provide no value.
Estimating the cost and schedule on projects where you are spending other people's money is called Governance. You may not like doing it - and that poster's strategy was to find customers who didn't ask for an estimate. Nice work if you can get it.
We need to learn to estimate better. We need better estimating processes. We need to remove the political processes from estimating. All of these provide one simple value to the project:
It's not our money and we are obligated to spend the money provided by the owner in a responsible and credible manner.
That conversation went nowhere. It seems there is a refusal to engage at that level. Instead, the approach is like that of early XP (before Agile): well, you're simply asking too many questions; you need to try it and learn how I see the world. There was one poster who loved to answer our questions about XP with quotes from Yoda. This sounds all too familiar.
Here's another good suggestion from Pat:
One of the points made resonated with me and I believe would get the nod from most experienced project managers; estimate in small chunks (my wording).
This has a statistical basis. Even at the same relative confidence, the absolute variance on a long-duration activity is greater than on a short-duration one: 15% of 200 days is a far bigger miss than 15% of 20 days.
Estimate small blocks of work
The result is a statistically better forecast.
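The arithmetic, assuming the small activities' estimating errors are independent (which is the point of decomposing the work), looks like this:

```python
import math

rel_uncertainty = 0.15

# One 200-day activity estimated to +/- 15%
one_big = rel_uncertainty * 200            # 30 days of exposure

# Ten independent 20-day activities, each +/- 15%: independent errors
# partially cancel, so the standard deviations add in quadrature
per_small = rel_uncertainty * 20           # 3 days each
ten_small = math.sqrt(10) * per_small      # ~9.5 days total

print(one_big, round(ten_small, 1))
```

Same total scope, same relative confidence per estimate, roughly a third of the exposure - provided the small estimates really are independent, which coupling between tasks can undo.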
At the end of the day we need to face up to an important problem in the project business. There are cost overruns, schedule overruns, and products or services that fail to meet their intended goals. But we also have to ask and answer a critical question - what is the value at risk? This means: what am I willing to invest to find out how much I might lose if my estimate is wrong? If I have no estimate of the cost, I can't even start to have that conversation.
The #NoEstimate advocates have yet to answer a simple question - where is this idea applicable?
Until then, it's likely they will remain a small community talking themselves into thinking this is a really good idea while nobody is listening. On our large, complex, high-risk programs, we need better estimating processes. #NoEstimates is not the answer without first answering the question: where is this applicable?
Chris Chapman has a nice post addressing some of the issues of #NoEstimates. This tries to explain why we should consider the approach of developing software without an estimate of the duration or cost. Let's look at these concepts in light of a software development project that is spending someone else's money. Commercial money or government money.
Hopefully these questions are not a surprise to anyone writing software using other people's money. So let's see if I can work through the concepts that Chris has presented without completely pissing off the angry voices of #NE. Here are Chris's direct quotes.
- It starts from a premise that you don't explicitly need estimates to deliver quality software if you are capable of developing and shipping into production small slices of functionality. Predictability of output emerges from the teams fast-learning of the problem domain that in turn comes from doing the work in small batches.
- If we build small pieces of functionality, we'll have a reference basis for knowing how long it takes to build similar pieces - or even how long it takes to build collections of small pieces. This is called Reference Class Forecasting.
- Certainly the process of building small pieces, getting them verified and into the hands of the users is the best approach for all software projects. No one doubts that. Our complex world does this. Simple worlds do it better only because they have fewer moving parts and fewer interfaces to manage.
- So the last sentence provides the answer to the how much and how long IFF we know something about the project's scope.
- If not, we have no answer to how long and how much, and we'll just be spending our customer's money until we reach the end of the needed functionality, run out of time, or run out of money.
- The critical idea here is the construction of the predictability of outcomes. This is worth repeating. With predictability of outcomes, we have the basis of estimating. If you choose not to estimate, then you're throwing away good information and the ability to be a hero when your customer asks the right questions.
- #NoEstimates teams actively measure their output to determine if they are slicing story features small enough so that they can be rapidly implemented in working software. To use a weak metaphor, it's like a baker learning to cut balls of dough from a larger blob so that they turn into uniform loaves of bread or buns. Over time, skill increases and the baker becomes a predictable, stable system.
- Outputs are the only measurable. DOD IMP/IMS and 5000.02 measure outputs, using Measures of Effectiveness (MoE), Measures of Performance (MoP), and Technical Performance Measures (TPM). This is then the basis of forecasting future performance from the past, using the reference class of this past performance.
- We can now make estimates to complete (ETC), estimates at completion (EAC), the capacity for work - throughput - assuming there is a steady state process in place. This is done in other domains using the assessment of these measures within the upper and lower bounds and the defined compliance levels within these bounds. This is also the basis of Reference Class Forecasting.
- This builds on the long-observed improvements that are gained from applying Lean and systems thinking to knowledge work. In turn, this requires having an understanding of queuing theory, flow, constraints, batch sizes and complexity thinking. All this said, however, it isn't inaccessible: It just takes some diligent, hard work. No magic or silver bullets, I'm afraid.
- For simple queuing, Little's Law produces an estimate of how long you will wait in a queue. This is a simple but effective way to estimate the performance of the system, since the result is not influenced by the arrival process distribution, the service distribution, the service order, or practically anything else.
- The flow and constraints provide us with the ability to forecast throughput and the resulting completion times for the input queue to be empty - the definition of done, assuming no rework.
- Oh by the way, the diligent, hard work, is what the professional Program Planning and Controls staff does every single day on the programs we work. It's called being accountable for our customer's money.
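Little's Law from the queuing point above, rearranged as W = L / lambda, is itself a one-line estimator. Here's a sketch with hypothetical numbers, hedged by the steady-state and no-rework assumptions:

```python
# Little's Law: L = lambda * W. Rearranged, the expected time for an item
# to flow through the system is W = L / lambda, regardless of the arrival
# or service distributions - assuming steady state and no rework.
def expected_wait(items_in_system, throughput_per_week):
    return items_in_system / throughput_per_week

# 48 stories in the backlog, team completing 8 per week on average (hypothetical)
print(expected_wait(48, 8))  # -> 6.0 weeks
```

That one division is a forecast - an estimate - derived entirely from measured past performance. Which is rather the point.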
So Now For the Punch Line
Let's assume we work for a customer that has a governance process where budgeting for projects is part of everyday life. With the processes described above from Chris's #NE post we can easily answer: how much budget will we need to allocate for this project? And once you've got the budget authorized and allocated to the project, when can we expect to start returning value to those who funded the project?
This is one of those WTF results. The #NE paradigm, as described in the post, is standard incremental development, on fine-grained boundaries, with sufficient reference class forecasting calibration to establish a basis for future estimates. Just like you'd find in any IMP/IMS, rolling-wave, work-package, Earned Value Management 0/100 Earned Value Technique (EVT) program we work for DOD, DOE, or NASA. WTF - this is how we do things. We calibrate the capacity for work - within each software-intensive reference class, e.g. Avionics, Life Support, Communications, Rendezvous and Dock - then use those calibrated capacities, with a model, to construct the estimates for our future.
You can't have a forecast to completion - the estimate of ETC or EAC - without knowing the underlying capacity for work (assuming no rework). Get this measurement and you're all set to forecast the future completion date and cost (assuming constant dollars), IFF you know the number of items in the queue. These items are of course the Stories in the queue, if you follow Vasco's advice. They can also be Function Points, SysML features, interfaces, even SLOC from memory-constrained flight avionics Handel-C FPGAs.
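That forecast is just arithmetic once the capacity is calibrated - a sketch with hypothetical numbers, under the stated assumptions of steady-state throughput, constant dollars, and no rework:

```python
def forecast(items_remaining, throughput_per_period, cost_per_period, actual_cost_to_date):
    """ETC/EAC from a calibrated capacity for work."""
    periods_remaining = items_remaining / throughput_per_period
    etc = periods_remaining * cost_per_period    # Estimate To Complete
    eac = actual_cost_to_date + etc              # Estimate At Completion
    return periods_remaining, etc, eac

# 120 items left, 10 per period, $50k burn per period, $400k spent so far (all hypothetical)
periods, etc, eac = forecast(120, 10, 50_000, 400_000)
print(periods, etc, eac)  # -> 12.0 600000.0 1000000.0
```

Measure the capacity, count the queue, and the "no estimates" throughput data has just produced an estimate.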
In the End
This has been a tortuous journey, exacerbated by some who poorly define the very purpose, using more platitudes than I've encountered in some time. One of these is the common platitude of simple-minded agile: deliver early and often. Which is only the case when the receiver of the software - from the queue - can actually accept the software, AND the software doesn't age while waiting to be consumed. While some may find this strange, in complex, interconnected systems like those found in ERP, embedded processors, and SW/HW integrated systems, this is common. The order of assembly is critical.
A much better approach, using exactly the same processes, is to deliver as planned. The plan states the need date for the software and the order of the software. The whole notion of priorities of features is the basis of Capabilities Based Planning and systems engineering processes that are mandated by our procurement process.
But I can just hear a few voices saying: if this is working for you, just move on. Which of course is complete BS, since every project - especially software projects - on the planet is troubled in some way. So it doesn't work for us, or for anyone. No one has a lock on the solution. Especially those without sufficient understanding that what they are saying is nearly identical to existing processes.
What has been will be again, what has been done will be done again; there is nothing new under the sun. - Ecclesiastes 1:9
- What has been - reference class forecasting for the capacity of work, based on actual performance of the work.
- What has been done, will be done again - relabeling the approach described by Chris in the post and Vasco's YouTube as #NoEstimates, is actually using Stories as the Basis of Estimate to calibrate the reference class.
- There is nothing new - of course it appears new, but had the needed homework been done - again, guided by Chris's post and Vasco's YouTube - the vocal objection to questioning would not have resulted.
- Secure executive support for major issues. Initial project documentation, such as a project charter and communications plan, will identify your project sponsors and champions, their roles and responsibilities, and escalation procedures. Rely on that, but also position yourself for frequent project status meetings with executives.
- Keep communications with sponsors and key stakeholders at a level that allows you to reach out to them when you may need them.
- Be aware of your project environment at all times. Regularly review project plans against where you are and what's planned to come. It will help minimize the risk of an issue arising when you least expect it — a resource pushed to the point of no return, for example.
- Look for lessons learned. Review the project history for potential concerns you may want to monitor and document in your risk log. Meet with other project managers in and outside of your organization to learn about pitfalls they may have encountered and how they handled them.
Skunk Works came about in 1943 as Lockheed (as they were known then) was working on one of the first American jet fighters. Kelly Johnson was a young engineer on this program. He outlined his 14 Rules & Practices to guide the teams. Some of the key points include:
- The manager should have practical control of the program (think product owner)
- Use a small number of good people
- Use very simple drawings with flexibility for making changes
- Minimize reporting to what is important
- Mutual trust between the military project organization and the contractor
Now, some of the other rules relate to the relationship between the vendor and the government, but this list sounds similar to some points in the Agile Manifesto and its guiding principles. I don't know if the work of Kelly Johnson influenced the people who wrote the manifesto, but it's clear the ideas have been around for a while. And as far as a track record goes, the original fighter plane (the XP-80) was designed and built in 143 days, and the U-2 has been operational for almost 60 years.
- The silo approach: Avoid developing the schedule in "silo" mode -- that is, without input from key stakeholders or subject-matter experts who can validate and confirm the schedule's content.
- Inappropriate task decomposition: When decomposing work into tasks, avoid an overly detailed or under-detailed decomposition. In my experience, each task should be no shorter than two days and no longer than two reporting periods (typically two weeks). A task of this length is generally explicit, focused, actionable, assignable and traceable.
- Too many milestones: Limit milestones to significant project events or decisions -- for example, the project start, completion of major deliverables or phases and the project's end.
- Overly ambitious schedule: Everyone wants to please the customer, but an aggressive schedule can have the opposite effect if unrealistic deadlines are continually missed. Instead, aim to exceed expectations by delivering the project in a realistic timeframe, with solid execution. If you inherit an overly ambitious schedule, you could "fast-track" (i.e., make work parallel) or "crash" the schedule by assigning more resources to reduce task duration.
- Loops: A project schedule is not a flowchart, and time cannot flow in reverse. Therefore loops -- circular task dependencies -- are not possible, and are rejected by most project management scheduling software. Do not confuse loops with iterations. Iterations of tasks -- such as design, implement and deploy -- are allowed on a project schedule.
- Danglers: These are tasks with missing dependency links. Only one task should have no predecessor (the project start task) and only one should have no successor (the project end task). All other tasks should have both a successor and a predecessor.
- Confusing task effort with schedule time: Don't just ask team members: "When can you complete this task?" Instead, ask for the estimated effort to complete the task in labor hours or days. Then transform the effort into work periods (the work days the team member can carry out the task) and map this to the project calendar (considering business days, holidays and vacation periods).
- Allocating schedule buffer instead of effort buffer: Don't allocate buffers to a certain schedule time. Task estimation is a three-step process: effort, duration and required calendar time. Allocate buffers primarily on a task's effort and not on the overall required calendar time.
- Depending on overall buffers: Avoid relying on sweeping buffers, like the classic "20 percent." When assigning buffers, consider the project-specific risks (for example, unfamiliar technologies), the experience of the project team and non-project-related factors, such as resources allocated to parallel projects or team members' involvement in non-project activities.
- Wrong tasks on the critical path: Project management tasks, effort estimation tasks and other schedule planning activities should not be located on the critical path. They have nothing to do with the actual project work tasks.
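The effort-to-calendar-time transformation from the list above can be sketched in a few lines. This is a simplified sketch: availability is assumed to be 6 productive hours per day, weekends are skipped, and holidays and vacations are left out for brevity.

```python
import math
import datetime as dt

def finish_date(start: dt.date, effort_hours: float, hours_per_day: float = 6.0) -> dt.date:
    """Map task effort onto the project calendar, skipping weekends."""
    days_needed = math.ceil(effort_hours / hours_per_day)
    day = start
    while days_needed > 0:
        if day.weekday() < 5:          # Monday..Friday count as workdays
            days_needed -= 1
        if days_needed > 0:
            day += dt.timedelta(days=1)
    return day

# 40 hours of effort at 6 productive hours/day = 7 workdays, not "one week":
# starting Monday 2024-01-01, the task finishes Tuesday 2024-01-09
print(finish_date(dt.date(2024, 1, 1), 40))
```

Note the answer to "when can you complete this?" came out of the calendar mapping, not out of the team member's gut feel about dates.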
The discussion around #NoEstimates was just about to come to an end, when Vasco posted a nice summary. Here's my take on his post in the domain and context of large, complex, mission critical, enterprise class software intensive projects.
Vasco's topics for the #NE rationale are below. But these phrases can be found on the programs we work on as well.
- Focus of Value, Manage the Risk
- Value is defined by fulfilling the capabilities needed by the business or mission.
- These capabilities provide the ability to do something of value.
- The unit of measure for these capabilities is Effectiveness. This means how effective the provided technical and operational items are at meeting the needs of the user.
- For a business capability, the MoE's can be measured in dollars, customer retention or satisfaction, inventory turns, and things like that.
- For mission capability, these are usually measured by the user. For example, in a ground attack aircraft, the technical requirement of holding a 7G turn provides the capability to get back on target rapidly.
- Discover the Product, Measure the Throughput
- The Product is the outcome of the project. It can also be a Service.
- The delivery of products at the planned time of need is the role of the Integrated Master Plan and Integrated Master Schedule (IMP/IMS)
- The IMP/IMS can be notional and represented by sticky notes on the wall, as long as the actual deliverables represent the delivery of a capability.
- Measuring progress of these deliverables is done with Technical Performance Measures. These define and measure the technical or operational performance of the product or services (or the sub components).
- Use data you have, Measure Continuously
- The measurements start with Measures of Effectiveness, Measures of Performance, Key Performance Parameters, and Technical Performance Measures.
- These terms will have different meanings in different domains.
- For the lightweight, small team, short duration, low risk, low cost project, the terms can be developed with face to face discussions, hand waving, white board drawings, etc.
- For complex, large, distributed, high risk, high cost, mission critical projects, more formal methods are needed.
In the End
In the end, if I read Vasco correctly, he's proposing a set of Principles for managing the development of software using #NoEstimates. But I don't see anything in this informative post about not estimating. There are, however, some inverted logic statements:
don’t focus on estimating when the development will be finished, instead you let the rate of development (Throughput) inform your view on when the MVP will be ready.
But if you have your capacity for work and you know the remaining work, then you have the ability to estimate the completion date.
If you intentionally don't do that, well what can I say?
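Capacity for work plus remaining work really is all the arithmetic needed for a first-cut completion estimate. A minimal sketch, with invented throughput numbers:

```python
from statistics import mean

# Stories completed in each of the last six iterations (invented data).
throughput = [7, 9, 6, 8, 7, 8]
remaining_stories = 45

# Mean capacity is 7.5 stories/iteration, so about 6 iterations remain.
iterations_left = remaining_stories / mean(throughput)
```

A responsible version would report a range - say, dividing by the best and worst observed throughput - rather than this single point.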
The suggested approach Vasco states in the post is good project management and follows the Five Immutable Principles, Practices, and Processes needed to increase your probability of project success. Can't go wrong with his suggestions. Can't go wrong with the 5 Immutable P,P,P's.
But where is the No Estimates part? Especially since measuring your capacity for work and knowing something about the remaining work gives you the ETC/EAC for free. Is it just a stubborn you can't make me do estimates approach? That can't be.
So using the topics in the post, how does making no estimates support the delivered value?
- Connect #NE with reality of projects?
- I know #NE'ers don't want this to be a process or a method, but how can it be put to work?
Here's the quote that tells it all
Lao Tzu was a 6th century BC philosopher whose pithy quotes are used and misused throughout civilization. The core problem with Tzu's quote here is he never took a probability and statistics class. Nor did he meet the statisticians George E. P. Box and Gwilym Jenkins and their Box-Jenkins methodology.
It may be that the original poster didn't take that probability and statistics class either, or may never have heard of the Box-Jenkins method. There are others in the #NE group that assume that probability and statistics applied to spending other people's money is cryptic and unnecessary.
Well, here are some facts.
- When money is spent that doesn't belong to you, you are obligated to tell the people whose money it is what you are going to do with it. This seems to be a common sense approach to providing value for that money.
- When spending other people's money, it is a good idea to know how much money to ask for. Asking can be incremental or it can be all in. But in the end, the person with the money likely needs to know how much money is going to be needed to get the things - capabilities - needed in exchange for that money.
So back to the quote. Since Tzu would have been unaware of Drs. Box and Jenkins, the quote sounds logical for the 6th century B.C. Of course it's not logical now, since the realm of probability and statistics entered the vernacular in the mid-1600's with the introduction of laws of evidence. Earlier there were games of chance in Greece, but no notation for writing down the rules. Things really got going in the 18th century: Jacob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's The Doctrine of Chances (1718) put probability on a sound mathematical footing, showing how to calculate a wide range of complex probabilities. Good statistical thinking started in the mid-1700's with the collection of demographic and economic data and its analysis.
What Box and Jenkins did was develop an algorithm for forecasting the future based on the past. Yes, Virginia, there is a Santa Claus. We can forecast the future given the past. We do it every single day, every single hour of the day. So when I hear you can't forecast the future, that person must not have attended a class where probability and statistics were taught. I pray that the current Computer Science courses teach probability and statistics. And that those graduates move on to writing code for money with an understanding of the underlying methods for making statistically sound forecasts using Box-Jenkins.
But back to the problem at hand. Vasco's very clear and concise description of measuring Stories produced (instead of Story Points) is the basis for measuring past performance. This is often called Reference Class Forecasting (RCF). RCF is used in a wide variety of domains, from Oil & Gas exploration (where I first encountered it), to economics, to heavy construction, to the estimates for cost and schedule for software intensive projects - where I work now.
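Here's a minimal sketch of the Reference Class Forecasting idea: take the distribution of actual-versus-estimated outcomes from a class of similar completed projects, and read the adjusted estimate off a chosen percentile. The ratios and the base estimate below are invented for illustration:

```python
from statistics import quantiles

# Hypothetical reference class: actual/estimated cost ratios from
# completed projects similar to the one being estimated.
overrun_ratios = [1.0, 1.1, 1.2, 1.2, 1.3, 1.4, 1.5, 1.6, 1.8, 2.0]

base_estimate = 100.0  # bottom-up estimate, in $K

# Uplift the base estimate to the 80th percentile of the reference class.
p80_ratio = quantiles(overrun_ratios, n=10)[7]  # 8th of 9 cut points = P80
adjusted = base_estimate * p80_ratio
```

The point of RCF is that the uplift comes from observed outcomes in the reference class, not from anyone's optimism about this particular project.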
Why Is This Not Understood?
Why the blanket statement that you can't forecast the future, or that forecasting - same as estimating - is a waste of time? I really can't say. But I sense it comes from bad experiences on teams where estimating was badly used, by badly performing managers, for all the wrong reasons.
But that is absolutely no excuse for not understanding the High School level probability and statistics behind using a time series of the past - which Vasco has described in detail - to forecast the range of possible outcomes for the future. Notice the notion of range. No 100% - that's simple BS - and anyone suggesting so is ignorant of all mathematical modeling processes. But confidence ranges, with error bands on the confidence are how daily estimates and forecasts are made. From how many apples to stock at Whole Foods, to the flow control of the air traffic arriving in the Denver TCA, to the weather itself, to the Estimate At Complete for our software projects.
Here's one of many posts about probabilistic forecasting. And a simple algorithm.
- Follow Vasco's advice. Break down the work into small chunks. Small enough that they approximately represent the actual work. This is called binning in the statistical sampling world.
- Perform the work.
- Calibrate the work effort with the time it took for the work. The result is a cardinal number showing the capacity for work.
- Look at the backlog of work, calibrated to the same cardinal bin size.
- Capture the actual performance over the past and put that in a time series.
- Download R and use that time series to forecast the future work, using parameters of your choice and the Box-Jenkins ARIMA function built into R.
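The post points at R's ARIMA for that last step. As a stdlib-only illustration of what the time series buys you - this is a naive normal-approximation forecast, not Box-Jenkins, and the data is invented:

```python
from statistics import mean, stdev

# Weekly throughput captured in the earlier steps, in calibrated bins.
series = [5, 7, 6, 8, 7, 9, 8, 8]
backlog = 40  # remaining work, in the same bin size

mu, sigma = mean(series), stdev(series)

# Weeks to drain the backlog at mean throughput, with a rough ~80%
# interval from a +/- 1.28-sigma band on throughput (normal approximation).
expected_weeks = backlog / mu
optimistic = backlog / (mu + 1.28 * sigma)
pessimistic = backlog / (mu - 1.28 * sigma)
```

ARIMA improves on this by modeling trend and autocorrelation in the series instead of treating every week as an independent draw, which is exactly why the Box-Jenkins machinery is worth the download.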
It's that simple. For hot shot, walk on water software developers, R and the process should be a walk in the park. But you have to want to do it. You have to want to have a credible answer when those with the money ask how much and how long. And you have to come to see that statements like making estimates inhibits innovation are not just simply nonsense, but provocateur nonsense.
That's How You Forecast the Future and act like a responsible provider of value to those with the money.
One of the suggestions in the #NoEstimates community is that you can't forecast the future. They confuse estimating with forecasting at times, but estimating a variable is a single point forecast. Forecasting can be either a single point or a time series - that is, a series of values in the future generated by the forecasting engine.
There are links below to some of the original work on time series forecasting. But before that, the notion of a time series is very powerful for project management work. And developing software for money is project management work. The concept that the past is an indicator of the future is at the heart of all estimating and forecasting. If you believe you can't forecast the future using the past and needed adjustments - this is Bayesian statistics - then stop reading here. And BTW, that Bayesian statistical forecasting is baked into all you do, consume, and interact with in daily life. From prescription drugs, to air traffic control, to the emissions controller on your car.
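That baked-in Bayesian updating can be shown with the simplest conjugate update - prior belief plus evidence yields a revised belief. The prior and the iteration counts here are invented for illustration:

```python
# Belief about the chance an iteration delivers what was committed,
# modeled as a Beta distribution (the Beta-Binomial conjugate pair).
alpha, beta = 2.0, 2.0  # weak prior: roughly 50/50 before any evidence

# Evidence: 7 of the last 9 iterations delivered their commitment.
hits, misses = 7, 2
alpha += hits
beta += misses

# Posterior mean: (2 + 7) / (2 + 7 + 2 + 2) = 9/13, about 0.69.
posterior_mean = alpha / (alpha + beta)
```

Every new iteration's result folds into the belief the same way - the past, adjusted by new evidence, informs the forecast.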
Poor Mr. Tzu
Had Mr. Tzu been able to read these works, it is unlikely he would have stated...
Those with knowledge don't predict and those who predict don't have knowledge
Instead, he would likely have understood that in a non-chaotic system, future behaviour is derived from past behaviour. And if we observe the past behaviour, we can construct a predictive model of the future, once we have categorized the drivers of that behaviour. This is the Box-Jenkins contribution.
If however the underlying system is chaotic, forecasting the future is sporty at best. But if we're working in a chaotic environment while spending other people's money, we're doomed to fail from the start. This is the basis - a weak, false basis - of Taleb's Black Swans. If they are applied to writing software for money, you're in trouble from day one.
This post is not going to show you how to do this. Mainly because a blog post is too small by several orders of magnitude to address the principles and practices of statistical forecasting. Instead here is a list of books, some free in PDF, that show how to do forecasting and most importantly how to do it using tools.
- A First Course in Statistical Programming with R, W. John Braun and Duncan Murdoch
- Introduction to Time Series and Forecasting, 2nd Edition, Peter Brockwell and Richard Davis
- A First Course on Time Series Analysis, Using SAS, Chair of Statistics, University of Wurzburg
- An Introduction to Applied Multivariate Analysis with R, Brian Everitt and Torsten Hothorn
- New Introduction to Multiple Time Series Analysis, Helmut Lütkepohl, Springer, 2005
and the final one for all those claiming you can't forecast the future...
- Introduction to Time Series and Forecasting, 2nd Edition, Peter J. Brockwell and Richard A. Davis, Springer Texts in Statistics, 2002
Here's some of the original papers on time series forecasting as well.
- Distribution of Residual Autocorrelations in Autoregressive Integrated Moving Average Time Series Models, G. E. P. Box and David A. Pierce, Journal of the American Statistical Association, December 1970, Volume 65, Number 332, Theory and Methods Section
- On a measure of lack of fit in time series models, G. M. Ljung, College of Business Administration, University of Denver, Colorado and G. E. P. Box
- Science and Statistics, George E. P. Box, Journal of the American Statistical Association, December 1976, Volume 71, Number 356, pp. 791-799
- Forecasting by Extrapolation: Conclusions from Twenty-five Years of Research, J. Scott Armstrong, University of Pennsylvania, 1984
Here's the Point - Again
When someone makes a statement like you can't forecast the future or you can't estimate without looking into the future, you'll now know - after reading at least one of these books or papers - that the statement is false in principle and most likely false in practice.
In principle it's simple. Forecasting the future is done every single day in a wide variety of domains, including software development. In practice, there are some assumptions that need to be addressed, but those are addressed in the books. The primary one has to do with disruptive events in the future. Taleb has made a living speaking about Black Swans - the claim that they can't be seen coming and that therefore we can't forecast the future.
In practice, we need to keep track of our performance, as Vasco describes. With this time series of past performance, we can start to make some statement about the future. These statements are probabilistic in nature, with statistical bounds - confidence intervals.
The Black Swan analogy has one problem right away: Australia has black swans everywhere. We don't here, and neither did England when that phrase was coined. That aside, the next problem is that with software projects, any Black Swans come from not looking for actual problems. Or if the project is truly chaotic, run away as fast as you can; you're going to fail no matter how hard you try or how many #NoEstimates you don't produce.
One of the posts on Twitter's #NoEstimates is a presentation of bad forecasts. This is one of those analogies that is interesting but fatally flawed.
First, forecasting political, technical, economic outcomes is not the same as forecasting the completion date or cost of a software development effort. The former are usually based on chaotic systems. If the latter are based on chaotic system, quit the project, run away, and go find a better project.
So let's see what the facts are behind each of the pages.
- Fooling Around with Alternating Current is just a waste of time - Edison and Tesla were in a pitched battle, using J. P. Morgan's money, to capture the electric lighting market. This is a marketing pitch, not a suggestion that AC was wrong. In fact, when Morgan dropped funding and forced Westinghouse to absorb the technology, AC took off. The Current Wars provides some background, with lots of references.
- The coming of the wireless era will make war impossible, because it will make war ridiculous - Marconi would not be considered a leading authority on statesmanship or world affairs. The fact that he believed this shows the naivety of his understanding of the politics of the era.
- There is no likelihood man can ever tap the power of the atom -
- You will be home before the leaves have fallen from the trees - start with some history. As German troops crossed the Belgian frontier on 4 August 1914, most people in Europe believed that the "boys will be home by Christmas." If they meant Christmas 1918, they were right. But of course, no one believed the war could possibly drag on so long. Previously, various authors had opined that, due to the massive expense of modern war, any future European hostilities would be short. Many people believed that assessment, but they forgot about one important thing: credit. No, there wasn't enough gold in the world to pay for a long war with modern weapons; there was, however, enough credit to pay for nearly anything. This was a disruptive event, with underlying chaotic processes. Forecasting in the presence of chaos is usually disappointing.
- A rocket will never be able to leave the earth's atmosphere - The New York Times writer got an F in the physics class. Robert Goddard - for whom the Goddard Space Flight Center in Greenbelt, MD, where we worked on the Hubble Robotic Service Mission rendezvous and dock software, is named - described how to fly to the moon in the early 1900's. Be careful of the source of the forecast. That source may actually be incompetent.
- It will be years - not in my time - before a woman becomes Prime Minister - well, she was in the right place at the right time. Another disruptive event enabled by chaotic processes.
- We can close the book on infectious diseases - the Surgeon General was asleep in the microbiology class. This was a political statement, not a scientific statement.
- Rail travel at high speeds is not possible, because passengers, unable to breathe, would die of asphyxia - Dr. Dionysius Lardner would have gotten a D in the physics class. Like someone stating you can't forecast the future would get a D in the statistics class.
- Democracy will be dead in 1950 - anyone forecasting political outcomes is a fool at best. Nate Silver has shown us how to do this in modern times. His The Signal and the Noise should be mandatory reading for any #NE self-proclaimed pundit - on how to use probabilistic forecasting and the math behind it.
- Stock Prices have reached what looks like a permanently high plateau - anyone believing that stock prices can be forecast as individual trading instruments needs to get connected with reality. First buy the index, second never chase the tape. Black Swans are disruptive events, so always hedge your bets. But at the same time, Big Data and time series analysis are the basis of all portfolio trading strategies. The quote is naive at best, looking back at 1929, when statistical forecasting of the market was nowhere to be found. John Maynard Keynes' A Treatise on Probability was the introduction to this discipline. So again, for those conjecturing that the future cannot be forecast, go buy Keynes' book, read it, and come back with counter arguments that are mathematically sound enough to refute his thesis.
- I see no good reasons why the views given in this volume should shock the religious sensibilities of anyone - well Darwin wasn't connected with the reality of the times was he?
- The Beatles have no future in show business - this is why there is more than one agent and record company. This has nothing to do with forecasting the future, just personal opinion. Uninformed in this instance, like many opinions on #NoEstimates, without any factual backup or external references.
- Remote shopping, while entirely feasible, will flop - Time magazine is probably not the best authority on what will or won't happen with technology in 1968. Circa 1968 was the IBM 360 with batch processing. CICS had not yet been developed and the telecommunications links were SDLC. If you read James Martin's The Wired Society, circa 1978, you'll see how that uninformed opinion was replaced by those actually capable of forecasting the future. The technology was not present, so no wonder they - writers at Time - would think that. Yet another example of not an actual mathematical forecast, but the opinion of one source.
- Ours has been the first [expedition], and doubtless be the last, to visit this profitless place - so much for environmental awareness. When the Powell Expedition arrived 8 years later the opinion was much different.
So what's the point?
- Forecasting the future is sporty business. Not for the uninformed, inexperienced, or unskilled.
- You need to be qualified, not opinionated. If you have an opinion, bring some facts along with it. It makes the conversation go a lot better if we can talk about facts, not just about you.
- You need to be able to separate the sources of uncertainty, before starting the forecasting and estimating process. The categories of uncertainty start with two major divisions:
- Reducible - epistemic - systematic uncertainty, which is due to things we could in principle know but don't in practice.
- Irreducible - aleatory - statistical uncertainty, which is unknowns that differ each time we run the same experiment.
- If you can't tell the difference, go find out the difference before forming your opinion.
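The two kinds of uncertainty are handled differently, which is why the separation matters before forecasting. In this sketch - the distributions and the 30% risk probability are invented - the aleatory part is natural duration variability sampled on every run, and the epistemic part is a discrete "will the unfamiliar technology work?" risk that a spike could retire:

```python
import random

random.seed(42)  # reproducible illustration

def simulate_duration():
    # Aleatory: irreducible day-to-day variation in task duration.
    base = random.triangular(8, 15, 10)  # (low, high, mode) in days
    # Epistemic: reducible by doing a spike; until then, carried as a
    # 30% chance the unfamiliar library fails and costs 5 days rework.
    rework = 5 if random.random() < 0.30 else 0
    return base + rework

samples = sorted(simulate_duration() for _ in range(10_000))
p80 = samples[int(0.80 * len(samples))]  # 80th-percentile duration
```

Buying down the 30% epistemic risk shifts the whole distribution left; the triangular spread stays no matter how much you learn, because it's irreducible.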
The #NoEstimates discussion seems to be stuck on exploring and questioning everything. To my sensibility, this is called taking out your brain and playing with it. Nothing really actionable, lots of go look for dysfunction, lots of restating the obvious - don't do wasteful things. Really, I never would have thought of that.
Here's a process loop that has served me well for a long time. This is from Statistics for Experimenters, Design, Innovation, and Discovery, George E. P. Box, J. Stuart Hunter, and William G. Hunter, 2005. The first edition is 1978, so these ideas have been around a long time.
With the search for dysfunction and challenging everything approach (which, by the way, is trivially easy compared to actually making the improvements - it's like pointing out "you're fat" without having any advice on how to lose weight), the next step is to deduce the root cause. And the root cause behind NOT estimating is bad estimating processes that produce bad estimates, used by bad managers.
With data and facts and a domain and context, the corrective actions to improve the probability of project success can be assessed. In the end it's this probability of success that must be the source of all we do in the project management business.
So here's the final set of questions for anyone suggesting improvements needed to increase the probability of success:
- Can the business owner understand the commitment needed to receive the value from the project that is using #NE, or any form of estimating or not estimating?
- For the project manager, some type of answer - any answer will do - to the questions what will this cost in the end and along the way, and when are you planning to be done. These answers of course have to be probabilistic answers.
- We use the phrases we plan to be done on or before the 3rd week in November, 2015 with an 80% confidence
- We plan to come in at $560M or below with a 75% confidence
- Conversations like this are what business people like to have. And since the business people - the Program Manager, the Director of IT, the field operations manager - have other managers asking them these questions, any suggested improvements to the discovered dysfunction need to provide some form of an answer. One that is actionable in some sense to move forward.
- If the corrective action to the dysfunction can't, doesn't, or simply refuses to provide this answer, I'd conjecture those paying the bills will have little interest in the idea as the source of that corrective action.
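Producing the phrasing in those two example answers is just a percentile lookup on whatever probabilistic forecast you have. The weeks-to-done distribution here is invented, standing in for the output of any forecasting model:

```python
import random
from datetime import date, timedelta

random.seed(7)  # reproducible illustration

# Simulated weeks-to-done from some forecast model (invented: N(10, 2)).
weeks = sorted(random.gauss(10, 2) for _ in range(5000))
p80_weeks = weeks[int(0.80 * len(weeks))]

start = date(2015, 6, 1)
commit_date = start + timedelta(weeks=p80_weeks)
# "We plan to be done on or before <commit_date> with 80% confidence."
```

The same lookup at a different percentile answers the cost question: the dollar figure you quote at 75% confidence is just the 75th percentile of the simulated cost outcomes.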
The value of telling stories is nothing new, and is probably as old as language itself. Stories have energized and captivated the attention of humans throughout history.
As with just about everything, storytelling itself is undergoing a type of transformation these days, fueled by the internet.
Dale Carnegie found that people feel most comfortable talking about themselves. Indeed, what topic do we know more about than ourselves?! In addition to our comfort level, we can also tell a compelling and unique story that has the potential to captivate our audience – because it comes from our heart, from our own personal experience, and because it is something that no one else has experienced in quite the same way.
The difference today is the internet, which takes us beyond a room full of people onto a geometrically expanded worldwide stage. But feeling comfortable, telling a compelling story, capturing attention – these have not changed. Just the stage has.
John Reiling, PMP
PM Training Online
As regular readers will know, I’ve been on maternity leave since January, working on The Parent Project. During that time I’ve still been blogging and writing for my Otobos clients on a part-time basis, supported by my wonderful family, and my new book, Shortcuts to Success, was published. But now it’s time to take it up a gear and get back to work properly.
Here are my 5 tips for returning to work as a project manager after maternity leave or a career break.
1. Arrange a handover
If you had someone cover for you while you have been away, make sure that you get a proper handover with that person. Don’t let your cover person leave too soon. Keeping your cover person on when you return to work is a cost for your company, so expect them to want you to do as short a handover as possible. However, two weeks is good, if you can negotiate it.
Use the handover time to get them to introduce you to any new project stakeholders or team members and to review the status of your projects.
2. Process your paperwork
Returning to work involves paperwork (doesn’t everything?). If you have worked your Keep In Touch days during maternity leave (in the UK you are entitled to 10 days of paid work without losing your maternity benefits – these are KIT days), then get your KIT paperwork in so you get paid.
Work out your holiday allocation as you’ll probably be returning part-way through the holiday year. If you were entitled to accrue holiday or to any paid leave during your time away, do the forms for that too.
Many people return to work part-time after maternity leave, so if this is a consideration for you, get your request for flexible working in as early as you can. This gives your manager plenty of time to review your case and make a decision about whether they can support your request for flexible working. Remember, in the UK your manager has an obligation to consider your request but they are not obliged to accept it.
3. Accept that your projects have moved on
As much as you’d like to slot right back in where you left off, that isn’t going to happen. Your projects have moved on, so accept it. Some of your projects may even have finished, and you could have missed out on the project closure or even the celebration!
That might leave you worrying about what you are going to work on next. You might be picking up a project in progress, or starting completely new work. Either way, you’ll have to get your head around the fact that your old project teams might not need you anymore.
4. Trust your skills
If you are returning from maternity leave, it can feel like a crisis of confidence. After all, you’ve been out of the workplace for anything up to a year (and in some countries it could be even longer). In that time some days your biggest achievement has been making sure everyone is up, washed, dressed and fed. How will you cope going back to the office? Will you remember your passwords? Or even your own phone number?
Trust your skills. For the last 9 months or so you’ve been project managing a family in transition. You can do your job – you have been doing it, albeit with different stakeholders. So chill. The office will have you back with open arms and you’ll fit right in.
5. Take it easy
Your priorities have changed. Whether it was a career break or maternity leave, you aren’t the same person that you were before you left. Whether you return to work full-time or part-time, be kind to yourself, your partner and your family. Take it easy and manage your return to work as a gradual transition.
What other tips do you have for people returning to work after a break? Let us know in the comments.
Background credit: Zinzibar
Copyright © A Girl's Guide to Project Management [5 tips for returning to work after maternity leave], All Rights Reserved. 2013.
Seems there is a seminal moment in the #NoEstimates conversation. The questions - without a lot of answers yet - come down to two ends of a spectrum:
- I give you a fixed budget and you tell me what I can get for that budget in terms of MVP (minimum viable product).
- I tell you what set of MVPs I need to go to market and you tell me when you'll have those done.
The spectrum in between needs to land on one of these for this very simple reason.
It's not about the developers wanting or not wanting to do estimates. It's not about exploring new ways or pretending you're Yoda and asking inverted logic questions or blocking people when you don't like the tough questions NOT being answered.
So it's this
The customer has a finite amount of money. The customer has a need date. Both of these should have a range of values and a confidence interval on those values. The conversation goes something like this:
I need these MVPs - more or less - on or before this date - more or less. Let's see if I can get some sense of what that is going to cost, so I can make some decisions about whether those are viable dates and viable sets of MVPs.
That is, I need to make decisions about how I am going to spend the money I've been given for this project, or go back and get more money and relief on the need date.
In the end it's all about making decisions. Not the decisions made by the developers - that comes later. But the business decisions about the project. About the cost and schedule of the project. About the needed capabilities produced by the project. About the business value delivered by those capabilities.
And in the end all business decisions land on money and time - how much and when?
So how can I make a decision about my project if I don't know:
- How much it is going to cost,
- When it is going to be done, and
- What MVPs I'm going to get at the end of the money and time?
or some combination of those three. Without knowing something about the time and cost, the business owner - the owner of the money - will have a hard time making decisions about the value, when this value will be available, and most importantly about when the produced capabilities will deliver that value.
All the gnashing of teeth about #NoEstimates MUST be able to answer those questions if it is going to be interesting to those of us tasked with managing the spending of other people's money. All other conversation topics are moot until there is some viable explanation of how #NE supports in some way, any way, progress toward answering those questions.