Recently I interviewed the authors of A Practical Guide to Dealing with Difficult Stakeholders*. They shared their insights into why managing stakeholders on projects doesn’t always work out as the books would have you believe. Today it’s the turn of Jake Holloway.
Hello, Jake. What’s the difference between the textbook approach to managing stakeholders and your approach?
Standard project management textbooks assume that stakeholders are universally compliant, rational and available.
You mean they’re not??
The reality is that some of them will not even read emails or attend meetings, and are completely irrational! They might even be completely opposed to the project for any number of personal and professional reasons.
The reality is that stakeholders are people, which means that they are potentially irrational, selfish, tribal, and proud. They like authority, influence, money, status. And sometimes they don’t like other people, and they don’t like change.
Think of our approach as bringing a political and social dimension to project management. Like Machiavelli did with The Prince.
Well, there’s a gap in my literary knowledge exposed. I’ve never read The Prince. Tell me about the most unhelpful stakeholder you’ve ever worked with.
We have all had very demanding sponsors/steering committees – so I don’t think that counts as difficult. I had a project sponsor who said in our first meeting, “I want you to know that this project shouldn’t exist and I will oppose the final recommendations, whatever they are.” Strangely enough that wasn’t the most difficult because at least he was being honest!
I think the most difficult was a CEO who took away a 100% essential technical specialist from a project, and then fired me for not being able to hit the schedule without him!
We need to be able to turn those people around. Do you see people doing that?
Good project managers do this all the time. It is what separates good project managers from bad ones. It works because they persuade, motivate, sell, enthuse and even manipulate! They don’t hide behind Gantt charts and reports and RAG statuses.
They also take the time to truly understand their stakeholders and what motivates them. They listen and empathise and put themselves in the stakeholders’ shoes, rather than think about the project only from their own perspective.
Tick box technocratic project managers are useless in projects with powerful and difficult stakeholders.
External suppliers, including individual contractors, have their own commercial objectives. It is in their interest to have projects go through change controls that increase profits. It suits them if the projects take longer to increase revenue.
This conflict of interest is not always balanced out by the external suppliers making the right decisions for the client. Even if the team delivering the project want to do the right thing, the sales person might be directly financially motivated to prevent it.
If you as a project manager pretend otherwise, the suppliers can get the upper hand. For evidence, see almost every single Central Government IT contract ever!
The team has a huge part to play in all this and they can be equally difficult. What’s your top tip for project managers struggling with their project team?
The book has a chapter full of very specific advice for dealing with a demotivated team. Whether they are being difficult or in full-on mutiny, the key is understanding the team and looking for what is behind the behaviour.
Are they bored, stressed, not learning? Is it a personality clash? Different cultures? Only once you have a handle on this can you start to come up with solutions.
Three simple things always work for me:
- Be completely honest with them.
- Talk to them much more than you think you need to.
- Have some fun together.
Look out for my review of A Practical Guide to Dealing with Difficult Stakeholders later this year.
Read my interview with one of the other authors, Roger Joby here: The Reality of Difficult Stakeholders.
About my interviewee: Jake Holloway is an experienced project manager, consultant and Business Development Director in the areas of IT, Digital and Marketing. He has managed and sponsored hundreds of projects and portfolios, and has been involved in building and designing project management systems.
*This article contains affiliate links at no cost to you.
This is a guest article by Paul Pelletier, LL.B., PMP.
Neutrality isn’t an option
If you are neutral in situations of injustice, you have chosen the side of the oppressor. If an elephant has its foot on the tail of a mouse, and you say that you are neutral, the mouse will not appreciate your neutrality.
Desmond Tutu, Social Rights Activist and retired Archbishop, South Africa
This quote applies to bullies as well as it applies to elephants. Bullying can be as harmful in the workplace as it is in schools and other areas of society, causing the well-understood personal emotional impacts plus a long list of challenges for project managers and the organizations where the bullying is taking place.
Projects are subsets of workplaces and since project management is, for the most part, an activity that involves working very closely with others, the impact of a bully in a project is potentially lethal to project success.
To complicate matters, workplace bullies are often hard to identify clearly. Bullying is a tactic used by the perpetrator to get ahead in the workplace. The bullies are often highly skilled workers who are socially manipulative, targeting those who threaten their career path while adroitly charming those who serve it well. Thus, a senior manager or their supervisor may say, “That person seems great to me” or “She always gets results.” Remember, while good employers purge bullies, most promote them.
There is also an important connection between the “Neutrality isn’t an option” expression and ethics. The PMI Code of Ethics and Professional Conduct includes, as part of the standard for Responsibility, the statement: “We report any illegal or unethical conduct.”
In other words, the Code says that neutrality isn’t an option.
Workplace bullying: a definition
The Workplace Bullying Institute (WBI) defines workplace bullying as “repeated, health-harming mistreatment, verbal abuse, or conduct which is threatening, humiliating, intimidating, or sabotage that interferes with work, or some combination of the three”.
It is a laser-focused, systematic campaign of interpersonal destruction. It has nothing to do with work itself. It is driven by the bully’s personal agenda and actually prevents work from getting done – and after all, that is precisely what project managers are responsible for doing: getting project work done through the efforts of others.
Workplace bullying includes behaviors that can be categorized into three types:

- aggressive communication
- manipulation of work, and
- manipulation of reputation.

The abuse runs the gamut from insults or offensive remarks, to giving unmanageable workloads, to withholding pertinent information, to inappropriate email or social media, to stealing credit for work.
The impact of bullying on projects
There is a wide range of direct negative and financial impacts which bullying has on projects. The most obvious are impacts on project success, team performance, budgets and timelines.
Shane Cowlishaw, writing on Stuff.co.nz, reports that workplace bullying costs New Zealand “hundreds of millions” of dollars. Australia reports losses in the billions. Not surprisingly for companies in the much larger United States, workplace bullying-related costs are estimated to be over $200 billion.
Coping strategies for dealing with workplace bullying
How does a project manager deal with an organizational culture of bullying in the workplace? This is a complex question that I have created an entire presentation and workshop on.
The best short answer is to appreciate what is within your realm of control and influence in order to create an action plan. For example, you may quickly observe that these cultural norms aren’t adopted by the whole organization but seem to have evolved in your unit. That may give you an opportunity of influence outside the unit.
Alternatively, you may see the senior management adopting a disrespectful tone and exhibiting poor leadership skills.
Before you decide what to do, here is a list of issues to consider:
- Take time to learn and observe those with influence (i.e. senior management, human resources staff, your manager).
- Do your investigative homework (i.e. What policies are in place related to workplace behaviour? What is the complaints process? Is it fair, safe? Have there been others who complained? If so, what was the result?)
- Consider what information would be needed to create the most impactful and effective strategy for presenting a complaint, and how you would obtain this evidence.
- Document every incident of unacceptable behavior in detail.
- Consider whether you have any colleagues willing to join forces with you – there is power and credibility in numbers.
- Realistically assess the risks and challenges you would face if you raised the flag. Be courageous but sensible.
With all the information in hand, create your action plan. Consider this a project. Be strategic, focused and patient. Plan – only move ahead when you are ready. Be prepared for conflict and challenges.
Always have a strategy to protect yourself, and your health. It is possible that the best strategy is to think about how to develop your organizational exit plan. You may not be able to change this toxic workplace but you can leave a message about why you left and move onto a harmonious workplace.
In our domain, Jon Katzenbach's definition of a team informs how we interact with our project members. A Team is defined as ...
A group of qualified individuals who hold each other mutually accountable for a shared outcome - Katzenbach, Wisdom of Teams
It has been suggested that ...
The Estimate-Commitment relationship stands in opposition to collaboration. It works against collaboration. It supports conflict, not teamwork.
This position is counter to our Katzenbach based teaming processes. The conjecture that estimates work against collaboration, rather than for collaboration, removes the mutual accountability condition for team success.
This is like speaking with our builder about the bedroom remodel project and him saying...
Oh here's my estimate to complete your bedroom remodel, but I have no intention of meeting that estimate.
Where we work, Estimates provide clarity and understanding of the mutual accountability for the shared outcome between the group of qualified individuals.
Where we work, and apply Agile software development processes, we've adopted Seven Pillars of Program Success. We work hard, every day, to: †
- Have a well understood set of capabilities needed to define "Done" in units of measure meaningful to the decision makers. These are usually stated in terms of effectiveness and performance.
- Have a genuine integrated plan associated with measuring physical percent complete in terms of Quantifiable Backup Data (QBD) to inform our Estimate To Complete (ETC) and Estimate at Completion (EAC). This QBD is a perfect fit with Agile's working software, predefined with the needed capabilities. This is why, in our domain, Earned Value Management + Agile is a match made in heaven.
- Produce an independent estimate of the cost, schedule, and probability the needed capabilities will perform as planned. These estimates are truly independent and not part of a missionary movement where people are trying to sell the program or force it to fit within the available funds.
- Provide sufficient and stable funding.
- Establish a culture of asking for and listening to outside competencies.
- Assure a willingness to ask hard questions and the courage and energy to not quit until there are credible answers to the questions.
- Recognize that it takes requirements (the implementation of the capabilities), resources, business processes, and everyone working together to increase the probability of success.
Your domain of course will be different. You or your team may not work on projects that must succeed on or before the needed date, at or below the needed budget, with the needed capabilities. That is, you can show up late, over budget, and with missing capabilities and the customer will consider that OK. And just to be clear, the value of incremental delivery is defined by the receiver of those capabilities, not the producer. Ask the customer whether the partial outcomes can actually be put to productive use in the business environment. Capabilities Based Planning defines which capabilities are needed in what order to provide business value.
We show up late, over budget, and with missing capabilities many times of course - so no need to point that out. Any number of reports, including bogus ones, show this. But the critical understanding is that we know we're going to be late, we know we're going to be over budget, and most of the time we know the delivered capabilities will not meet the intended specifications every reporting period - and we have a plan (maybe not the right plan) to fix it.
Risk Management is How Adults Manage Projects - Tim Lister
In our domain, being late, over budget, and delivering less than the required capabilities is never acceptable to the customer. Are we late, over budget, and do we have performance issues? Of course. It's called development. But we know it, have visibility into the root causes, and have corrective action plans. This visibility is part of the process. Without a steering target and actuals, no error signal can be generated to be used for course correction. One of our PMs was a Navy navigator on an aircraft carrier. The commanded heading was required for him to carry out his navigation processes. Without estimates of the impediments to be encountered along the way, and of the productivity of progress along the course, there is no way to know which path to take to the desired destination. By the way, measuring past performance and projecting it as future performance only works if the future conditions are like the past conditions. This is rarely the case on any sufficiently complex project.
Yogi Berra reminds us — If you don't know where you are going, you'll end up someplace else.
This poor performance is actually reported in a database for review every reporting period (at minimum monthly) and used to adjust award fees and the assessment for the next job, which significantly impacts the selection process. This is called Closed Loop Control.
When there are no Estimates to Complete (ETC) or Estimates at Completion (EAC), there is an Open Loop Control condition, and the corrective actions needed (but not always effective) have no steering target, and no variance to steer against, to move the project back to GREEN.
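As a sketch of what that steering target looks like, here is the standard Earned Value arithmetic for ETC and EAC. The formulas are the textbook EVM ones; the sample numbers are hypothetical.

```python
# Closed-loop steering with Earned Value data (standard EVM formulas;
# the sample numbers are hypothetical).

def evm_steering(bac, bcwp, acwp):
    """Return CPI, ETC, and EAC from Budget At Completion (BAC),
    Budgeted Cost of Work Performed (BCWP, the earned value), and
    Actual Cost of Work Performed (ACWP)."""
    cpi = bcwp / acwp          # cost efficiency to date
    etc = (bac - bcwp) / cpi   # estimate of the remaining cost
    eac = acwp + etc           # estimate at completion
    return cpi, etc, eac

# A project with a $1,000k budget, $400k earned, $500k spent:
cpi, etc, eac = evm_steering(bac=1000, bcwp=400, acwp=500)
print(f"CPI={cpi:.2f}  ETC={etc:.0f}  EAC={eac:.0f}")
# A CPI below 1.0 is the error signal; without the EAC steering target
# there is nothing to generate a corrective action against.
```

The EAC here is the steering target: the variance between it and the BAC is what the corrective action plan steers against.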
So estimates don't stand in the way of cooperation; they are the foundation of mutual accountability for the shared outcome based on cooperation.
† These seven pillars are derived from VADM Joseph Wendell Dyer, USN (Retired), the Navy's chief test pilot, F/A-18E/F Program Manager, and Commander, Naval Air Systems Command, plus ten years as an executive at iRobot Corporation. Many of our projects are not on the scale of VADM Dyer's, but they are still mission critical, manifestly important to the success of our customers' businesses. If they were to fail - cost too much, show up beyond the business need date, or not provide the needed capabilities - the success of the business is in jeopardy. Again, your domain may be significantly different. Use as appropriate.
Practical People Engagement* is one of the few books on my shelf printed in colour throughout. That’s the quality that Patrick Mayfield brings to his work. He is one of the authors behind the AXELOS Managing Successful Programmes manual and a national authority on change management, so I’m delighted that he has offered me a copy of his latest book to give away.
Get in touch using the contact form or leave a comment below with the phrase ‘I engage people’ and I’ll enter you into the draw.
The giveaway closes at 5pm UK time on Friday 4 September 2015.
In the meantime, you can read this article by Patrick about why Agile project management is here to stay.
Read the terms and conditions of the giveaway here (if you like that kind of thing – they basically say we choose a winner at random and my decision is final).
*That’s an affiliate link, in case you were wondering. Affiliate links help keep this blog running so thank you!
In software development, we almost always encounter situations where a decision must be made when we are uncertain what the outcome might be, or even uncertain about the data used to make that decision.
Decision making in the presence of uncertainty is standard management practice in all business and technical domains, from business investment decisions to technical choices for project work.
There are many techniques for decision making. Decision trees are common: the probability of an outcome of a decision is attached to a branch of the tree. If I go left at the branch - the decision - what happens? If I go right, what happens? Each branch point is a decision, and each of the two or more branches is an outcome. Probabilities are applied to the branches, and the outcomes - which may be probabilistic as well - are assessed for their benefits to those making the decision.
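Here is a minimal sketch of that manual process. The branches, probabilities, and payoffs are hypothetical; the point is only the mechanics of weighting outcomes and picking the branch with the best expected value.

```python
# A one-level decision tree: two branches at the decision point, each
# with probabilistic outcomes. All numbers are hypothetical.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs for one branch."""
    return sum(p * v for p, v in outcomes)

decision = {
    "go left":  [(0.6, 100), (0.4, -50)],   # riskier, bigger payoff
    "go right": [(0.9, 30),  (0.1, -10)],   # safer, smaller payoff
}

best = max(decision, key=lambda k: expected_value(decision[k]))
for branch, outcomes in decision.items():
    print(f"{branch}: expected value {expected_value(outcomes):.0f}")
print("choose:", best)
```

Real trees nest these branch points several levels deep, but the evaluation is the same: roll expected values back up from the leaves.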
Another approach is Monte Carlo Simulation of decision trees. Tools like Palisade and Crystal Ball are what we use for many decisions in our domain; there are others. They work like the manual decision-tree process, but let you tune the probabilistic branching and probabilistic outcomes to model complex decision-making processes.
In the project management paradigm of the projects we work, there are networks of activities. Each activity has some dependency on prior work, and each activity produces dependencies for follow-on work. These can be modeled with Monte Carlo Simulation as well.
The Schedule Risk Analysis (SRA) of the network of work activities is mandated on a monthly basis in many of the programs we work.
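A toy version of such a simulation is sketched below, with a hypothetical four-activity network (A feeds B and C in parallel, D waits for both) and triangular duration distributions standing in for the natural variance of each task.

```python
# Monte Carlo sketch of a small activity network. All durations and the
# network itself are hypothetical.
import random

random.seed(7)

def dur(lo, ml, hi):
    # triangular (min, most likely, max) models the natural task variance
    return random.triangular(lo, hi, ml)

def one_run():
    a = dur(2, 3, 6)    # A precedes everything
    b = dur(4, 5, 9)    # B and C run in parallel after A
    c = dur(3, 4, 10)
    d = dur(1, 2, 4)    # D starts after the longer of B and C
    return a + max(b, c) + d

finishes = sorted(one_run() for _ in range(10_000))
mean = sum(finishes) / len(finishes)
p80 = finishes[int(0.8 * len(finishes))]
print(f"mean finish={mean:.1f}  80% confidence finish={p80:.1f}")
```

Notice the 80% confidence finish sits above the mean: the `max(b, c)` merge point skews the distribution to the right, which is exactly what an SRA is meant to expose.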
In Kanban and Scrum systems, Monte Carlo Simulation is a powerful tool to reveal the expected performance of the development activity. Forecasting and Simulating Software Development Projects: Effective Modeling of Kanban & Scrum Projects Using Monte Carlo Simulation by Troy Magennis is a good place to start for this approach.
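A minimal sketch of the throughput-sampling style of forecast this approach uses: resample observed weekly throughput to estimate how many weeks a remaining backlog will take. The throughput history and backlog size here are hypothetical.

```python
# Throughput-sampling forecast for a Kanban/Scrum backlog.
# Sample data is hypothetical.
import random

random.seed(11)

weekly_throughput = [4, 6, 3, 5, 7, 4, 5]   # observed items finished/week
backlog = 60                                 # items remaining

def weeks_to_finish():
    done, weeks = 0, 0
    while done < backlog:
        done += random.choice(weekly_throughput)  # resample a past week
        weeks += 1
    return weeks

runs = sorted(weeks_to_finish() for _ in range(5_000))
p50 = runs[len(runs) // 2]
p85 = runs[int(0.85 * len(runs))]
print(f"50% confidence: {p50} weeks  85% confidence: {p85} weeks")
```

This only works as a forecast to the extent that future throughput behaves like the sampled history - the same caveat raised below about past data.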
Each of these approaches and others are designed to provide actionable information to the decision makers. This information requires a minimum understanding of what is happening to the system being managed:
- What are the naturally occurring variances of the work activities that we have no control over - aleatory uncertainty?
- What are the event based probabilities of some occurrence - epistemic uncertainty?
- What are the consequences of each outcome - decision, probabilistic event, or naturally occurring variance - on the desired behavior of the system?
- What choices can be made that will influence these outcomes?
In many cases, the information available to make these choices is in the future. Some is in the past. But that information in the past needs careful assessment.
Past data is only useful if you can be assured the future is like the past. If not, making decisions using past data without adjusting that data for possible changes in the future takes you straight into the ditch - see The Flaw of Averages.
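A small illustration of the Flaw of Averages, with a hypothetical overrun penalty: for a nonlinear cost function, the cost of the average input is not the average cost.

```python
# The Flaw of Averages: plugging the average duration into a nonlinear
# penalty function badly underestimates the expected penalty.
# All numbers are hypothetical.
import random

random.seed(3)

# Task duration: triangular (min=2, most likely=4, max=12) days
durations = [random.triangular(2, 12, 4) for _ in range(100_000)]

def overrun_penalty(d, deadline=6):
    # $10k per day late; nothing if we finish by the deadline
    return max(0.0, d - deadline) * 10

avg_duration = sum(durations) / len(durations)
cost_of_average = overrun_penalty(avg_duration)   # plug in the average
average_cost = sum(map(overrun_penalty, durations)) / len(durations)

print(f"penalty at the average duration: {cost_of_average:.1f}")
print(f"average penalty:                 {average_cost:.1f}")
```

The average duration lands right at the deadline, so the "plan to the average" penalty is roughly zero, while the true expected penalty is substantial: the ditch the text warns about.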
In order to have any credible assessment of the impact of a decision using data in the future - where will the system be going in the future? - it is mandatory to ESTIMATE.
It is simply not possible to make decisions about future outcomes in the presence of uncertainty in that future without making estimates.
Anyone who says you can is incorrect. And if they insist it can be done, ask for testable evidence of their conjecture, based on the mathematics of probabilistic systems. No credible, testable data? Then it's pure speculation. Move on.
The False Conjecture of Deciding in Presence of Uncertainty without Estimates
- Slicing the work into similar-sized chunks, performing work on those chunks, and using that information to produce information about the future makes the huge assumption that the future is like the past.
- Recording past performance, making nice plots, and running static analysis for mean, mode, standard deviation, and variance is naive at best. The time series variances are rolled up, hiding the latent variances that will emerge in the future. Time series analysis (e.g. ARIMA) is required to reveal the possible values in the past dataset that will emerge in the future, assuming the system under observation remains the same.
Time series analysis is a fundamental tool for making forecasts of future outcomes from past data. Weather forecasting - plus complex compressible fluid flow models - is based on time series analysis. Stock market forecasting uses time series analysis. Cost and schedule modeling uses time series analysis. Adaptive process control algorithms, like the speed control and fuel management in your modern car, use time series analysis.
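As a toy slice of this idea, here is an AR(1) model fitted by least squares and used for a one-step forecast. A real analysis would use the full ARIMA machinery in Box's book; the data here is synthetic, generated from a known process.

```python
# Fit y[t] = phi * y[t-1] + noise by least squares, then forecast.
# Synthetic data; a real analysis would use full ARIMA identification.
import random

random.seed(5)

phi_true = 0.8
y = [0.0]
for _ in range(500):
    y.append(phi_true * y[-1] + random.gauss(0, 1))

# Least-squares estimate of phi from consecutive pairs
num = sum(y[t - 1] * y[t] for t in range(1, len(y)))
den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
phi_hat = num / den

forecast = phi_hat * y[-1]   # one-step-ahead forecast
print(f"estimated phi={phi_hat:.2f}, next-step forecast={forecast:.2f}")
# The forecast is only valid while the process generating the data stays
# the same - exactly the caveat in the text.
```

Unlike a static mean-and-standard-deviation summary, the fitted coefficient captures how today's value carries information about tomorrow's.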
One of the originators of time series analysis, George E. P. Box, author of the seminal book Time Series Analysis, Forecasting and Control, is often seriously misquoted when he said All Models are Wrong, Some are Useful. Anyone misusing that quote to try to convince you that you can't model the future didn't (or can't) do the math in Box's book and likely got a D in the high school probability and statistics class.
So do the math, read the proper books, gather past data, model the future with dependency networks and Kanban and Scrum backlogs, measure current production, forecast future production based on Monte Carlo models - and don't believe for a moment that you can make decisions about future outcomes in the presence of uncertainty without estimating that future.
I sobbed quietly on the train reading A Swift Pure Cry* last week. At least at that time in the morning when I’m commuting to work everyone around me was asleep so I think I got away with it. My youngest son is 18 months old – when am I going to be able to read sad books about babies without breaking down? It’s expertly written but I think I would have avoided it if I had known what it was about before I started.
I chose it in a hurry during my lunch break because something awesome happened this month: I found a library. An actual library, just around the corner from the sushi shop.
This is huge, because normally I only read novels that people have lent me. I enjoy them, for the most part, but they are not my choice. The last time I went into a book shop to choose something myself I was so underwhelmed by the chart list that I left with nothing. The selection was awful. But a library… That’s totally different.
So far I have made frangipane swirls, chocolate chip cookies and man-in-the-moon scones. The children have contributed precisely nothing to this effort. I can see I still have work to do there.
For my project management reading I read Leadership Toolbox for Project Managers by Michel Dion on Kindle.
It’s very quotable. I particularly liked: “Don’t be afraid to be a leader. Your team needs one.” You do have to wait until Chapter 4 before there is anything particularly about project leadership as the early part of the book is all about setting the scene and getting the foundations right.
Then I read A Practical Guide to Dealing with Difficult Stakeholders. It’s short, so it was done in a few train journeys and I liked all the case studies and stories. I ended up feeling a bit sorry for the authors, having had to work with so many difficult stakeholders in their careers. You can read my interview with co-author Roger Joby here.
The boys have been particularly taken with Henry’s Holiday which we seem to be reading every bedtime. It’s a cute story but some of the turns of phrase are a bit clichéd. The overall message of the book seems to be ‘there’s no place like home’ but the illustrations are lovely.
I’m a bit short of reading for next month, and I’ve got a week off so hopefully I will have reading time. What do you recommend?
*This article contains affiliate links at no cost to you.
A Tweet caught my eye this weekend.
Agile development is a phrase used in software development to describe methodologies for incremental software development. Agile development is an alternative to traditional project management where emphasis is placed on empowering people to collaborate and make team decisions in addition to continuous planning, continuous testing and continuous integration.
Next, the notion that Agile is actually risk management is very misunderstood. Agile provides raw information for risk management, but risk management has little to do with which software development method is being used. The continuous nature of Agile provides more frequent feedback on the state of the project, which is advantageous to risk management. Since Agile mandates this feedback on fine-grained boundaries - weeks not months - the actions in the risk management paradigm below are also fine-grained.
Where Does Risk Come From?
All risk comes from uncertainty. Uncertainty comes in two types. (1) Aleatory (naturally occurring in the underlying process and therefore irreducible) and (2) Epistemic (a probability that something unfavorable will happen).
Risk results from uncertainty. To deal with the risk from aleatory uncertainty we can only have margin, since the resulting risk is irreducible. This is schedule margin, cost margin, and product performance margin. This type of risk is just part of the world we live in. Natural variances in the work performed developing products need margin. Natural variances in a server's throughput performance need margin.
We can deal directly with the risk from Epistemic uncertainty by buying down the uncertainty. This is done with experiments, trials, incremental development and other risk reduction activities that lower the uncertainty in the processes.
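The margin idea for the irreducible, aleatory side can be sketched in a few lines: size the margin as the gap between a confident completion percentile and the deterministic plan. All numbers here are hypothetical.

```python
# Sizing schedule margin against irreducible (aleatory) duration
# variance. Plan and distribution parameters are hypothetical.
import random

random.seed(2)

planned = 20   # deterministic plan: 20 days
# Natural variance around the plan: triangular (min=16, ml=20, max=32)
samples = sorted(random.triangular(16, 32, 20) for _ in range(50_000))

p85 = samples[int(0.85 * len(samples))]
margin = p85 - planned
print(f"85th percentile finish: {p85:.1f} days "
      f"-> carry {margin:.1f} days of schedule margin")
```

The margin protects against variance we cannot remove; epistemic risks, by contrast, get buy-down activities rather than margin, because they can actually be reduced.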
By the way, many use the notion that risk is both positive and negative. This is not true; it's a naive understanding of the mathematics of risk processes. PMI does this. It is not allowed in our domain(s).
Agile and Incremental Delivery
There is a popular myth in the agile community that it has a lock on the notion of incremental delivery. This is again not true. Many product development lifecycles use incremental and iterative processes to produce products: Spiral development, Integrated Master Plan/Integrated Master Schedule, Incremental Commitment. All are applicable to Software Intensive Systems and System of Systems domains, like Enterprise ERP.
Managing in the Presence of Uncertainty and the Resulting Risk
Without that connection it just ain't true.
- DOD Risk Management Guide V7
- Software Engineering Risk Management
- Risk Happens
- Making Hard Decisions
- Effective Opportunity Management for Projects
- Effective Risk Management: Some Keys to Success
- Technical Risk Management
- Project Risk Management: Processes, Techniques and Insights
- Managing Risk: Methods for Software Systems Development
- SEI: Continuous Risk Management
And a short white paper on Risk Management in Enterprise IT
In the world of project management, and the process improvement efforts needed to increase the Probability of Project Success, anecdotes appear to prevail when it comes to suggesting alternatives to observed dysfunction.
If we were to pile all the statistics for all the data on the effectiveness or ineffectiveness of all the process improvement methods on top of each other, they would lack the persuasive power of a single anecdote in most software development domains outside of Software Intensive Systems.
Why? Because most people working on small-group agile development projects - as opposed to enterprise, mission-critical projects that cannot fail, that must show up on time, on budget, with not just minimum viable products but the mandatorily needed capabilities - rely on anecdotes to communicate their messages.
I say this not from just personal experience, but from research for government agencies and commercial enterprise firms tasked with Root Cause Analysis, conference proceedings, refereed journal papers, and guidance from those tasked with the corrective actions of major program failures.
Anecdotes appeal to emotion. Statistics, numbers, and verifiable facts appeal to reason. It's not a fair fight. Emotion always wins, without acknowledging that emotion is a seriously flawed basis for making decisions.
Anecdotal evidence is evidence where small numbers of anecdotes are presented. There is a large chance - statistically - that this evidence is unreliable due to cherry picking or self selection (this is the core issue with the Standish Reports, or with anyone claiming anything without proper statistical sampling processes).
Anecdotal evidence is considered dubious support for any generalized claim. An anecdote is no more than a type of description (i.e., a short narrative), and is often confused in discussions with its weight, or other considerations, as to the purpose(s) for which it is used.
We've all heard stories, ½ of all IT projects fail. Waterfall is evil, hell even estimates are evil stop doing them cold turkey. They prove the point the speaker is making right? Actually they don't. I just used an anecdote to prove a point.
If I said The Garfunkel Institute just released a study showing 68% of all software development projects did not succeed because the requirements gathering process failed to define what capabilities were needed when done, I'd have made a fact-based point. And you'd become bored reading the 86 pages of statistical analysis and correlation charts between all the causal factors contributing to the success or failure of the sample space of projects. See, you're bored already.
Instead, if I said every project I've worked on went over budget and was behind schedule because we were very poor at making estimates, that'd be more appealing to your emotions, since it is a message you can relate to personally - having likely experienced many of the same failures.
The purveyors of anecdotal evidence to support a position make use of a common approach. Willfully ignoring a fact based methodology through a simple tactic...
We all know what Mark Twain said about lies, dammed lies, and statistics
People can certainly lie with statistics - it's done all the time. Start with How to Lie With Statistics. But those types of lies are nothing compared to the ability to script personal anecdotes to support a message. From "I've never seen that work" to "What, now you're telling me - the person who actually invented this earth-shattering way of writing software - that it doesn't work outside my personal sphere of experience?"
An anecdote is a statistic with a sample size of one. OK, maybe a sample size of a small group of your closest friends and fellow travelers.
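The arithmetic behind that statement is simple: the uncertainty of an estimated proportion shrinks roughly as one over the square root of the sample size, so a sample of one tells you essentially nothing. The numbers below are a toy illustration.

```python
# How the 95% confidence half-width of an estimated proportion shrinks
# with sample size. Toy numbers, normal approximation.
import math

def ci_half_width(p_hat, n, z=1.96):
    """95% normal-approximation half-width for an estimated proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

for n in (1, 10, 100, 1000):
    print(f"n={n:5d}  estimate 0.5 +/- {ci_half_width(0.5, n):.2f}")
# At n=1 the interval spans essentially everything - the anecdote tells
# you nothing; at n=1000 the estimate is pinned to a few percent.
```

(The normal approximation is crude at tiny n, but that only strengthens the point: a single observation constrains nothing.)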
We fall for this all the time. It's easier to accept an anecdote describing a problem and possible solution from someone we have shared experiences with, than to investigate the literature, do the math, even do the homework needed to determine the principles, practices, and processes needed for corrective action.
Don’t fall for manipulative opinion-shapers who use story-telling as a substitute for facts. When we're trying to persuade, use facts, and use actual examples based on those facts. Use data that can be tested, rather than personal anecdotes deployed to support an unsubstantiated claim without suggesting both the root cause and the testable corrective actions.
It is conjectured that uncertainty can be dealt with by ordinary means - open conversation, identification of the uncertainties and their handling strategies - and that quantitative methods are too elaborate and unnecessary for all but the most technical and complicated problems.
When asked what is meant by uncertainty, the answer many times is "probably" or "very likely." But rarely is there any quantitative measure meaningful to the decision makers. Since the future is always uncertain in our project domain, making decisions in the presence of uncertainty is a critical success factor for all project work.
Decision making is one of the hard things in life. True decision-making occurs not when we already know the outcome, but when we do not know what to do. When we have to balance conflicting values, costs, schedules, and needed capabilities, sort through complex situations, and deal with real uncertainty. To make decisions in the presence of this uncertainty, we need to know the possible outcomes of our decision and the possible alternatives and their costs - in the short term and in the long term. Making these types of decisions requires that we make estimates of all the variables involved in the decision-making process.
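As a minimal sketch of weighing alternatives under uncertainty - every probability and dollar figure below is invented for illustration - the probability-weighted value of each alternative can be compared:

```python
# A minimal sketch of comparing alternatives under uncertainty.
# Every probability and dollar figure below is invented for illustration.

alternatives = {
    # name: list of (probability, net value in $K) outcomes
    "build in-house": [(0.6, 400), (0.4, -150)],
    "buy COTS":       [(0.8, 250), (0.2, -50)],
}

def expected_value(outcomes):
    """Probability-weighted value of one alternative."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * v for p, v in outcomes)

for name, outcomes in alternatives.items():
    print(f"{name}: expected value = {expected_value(outcomes):.0f} $K")
```

Expected value alone ignores risk tolerance; a decision maker may still prefer the alternative with the smaller worst case. The point is that none of these numbers exist without estimating them first.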
What Are Probabilities?
There is a trend in the software development domain of redefining well-established terms in mathematics, engineering, and science - it seems to suit the needs of those proffering that decisions can't be made in the presence of uncertainty.
Probabilities represent our state of knowledge. They are a statement of how likely we think an event might be to occur, or of the possibility of a value lying within a range of values.
These probabilities are rooted in uncertainty, and uncertainty comes in two forms: aleatory and epistemic.
- Aleatory uncertainty is the natural randomness in a process. For discrete variables, the randomness is parameterized by the probability of each possible value. For continuous variables, the randomness is parameterized by the probability density function (pdf).
- Epistemic uncertainty is the uncertainty in the model of the process. It is due to limited data and knowledge. The epistemic uncertainty is characterized by alternative models. For discrete random variables, the epistemic uncertainty is modeled by alternative probability distributions. For continuous random variables, the epistemic uncertainty is modeled by alternative probability density functions. In addition, there is epistemic uncertainty in parameters that are not random but have only a single correct (but unknown) value.
Both these uncertainties exist on projects. When making good decisions on projects, we need to know something about these uncertainties and have handling plans for the risks they produce.
- For Aleatory uncertainty (irreducible risk) we need margin. The margin protects the project deliverables from the unfavorable cost, schedule, and technical performance impacts that are part of the naturally occurring variances.
- Epistemic uncertainty (reducible risk) can be addressed by buying down the uncertainty - paying money to learn more.
This, by the way, is a primary benefit of Agile software development, where forced short-term deliverables provide information to reduce risk. Agile is not a risk management process - many other steps are needed for that. But Agile is a means to reveal risk and take corrective action on much shorter time boundaries, reducing the accumulation of risk.
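A hedged sketch of sizing margin against aleatory uncertainty (the task durations and the triangular variance band below are invented): simulate the naturally occurring variances and set the margin at a chosen confidence level, here the 80th percentile.

```python
import random

random.seed(42)

# Hypothetical plan: 10 tasks whose most-likely durations sum to 100 days.
most_likely = [8, 12, 10, 9, 11, 10, 10, 12, 9, 9]

def simulate_total():
    """One Monte Carlo trial: each task's irreducible variance is modeled
    as a triangular distribution from -10% to +25% of its most-likely
    duration (an invented band for this sketch)."""
    return sum(random.triangular(d * 0.9, d * 1.25, d) for d in most_likely)

totals = sorted(simulate_total() for _ in range(10_000))
p80 = totals[int(0.8 * len(totals))]
deterministic = sum(most_likely)

print(f"deterministic sum : {deterministic} days")
print(f"80th percentile   : {p80:.1f} days")
print(f"margin to carry   : {p80 - deterministic:.1f} days")
```

The gap between the deterministic sum and the 80th-percentile outcome is the schedule margin the project needs to carry against the irreducible risk - no amount of money spent on learning removes it.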
Some Background on Decision Making in the Presence of Uncertainty
One way to distinguish good decisions from bad decisions is to assess the outcomes of those decisions. The measure of what makes a decision good or bad needs some definition itself. There are issues, of course. The results of the decision may not appear until some time in the future, but we need to know something about the possible results before we make the decision. As well, we'd like to see the results of the alternatives to our decision - the choices that weren't made or were rejected.
A fundamental purpose of quantitative decision making is to distinguish between good and bad decisions, and to provide criteria for assessing the goodness of a decision. To do this we first need to establish what the decision is about. For example:
- When do you think we'll be ready to go live with the needed capabilities we're paying you to develop?
- If we switch from our legacy systems to an ERP system, how much will we save over the next 5 years, accounting for the sunk cost of the entire project?
- On the list of desirable features, which ones can we get by the current need date if we reduce the budget by 15%?
Making decisions like these in the presence of uncertainty by estimating future outcomes is a normal, everyday, business process. Any suggestion these decisions can be made without estimates is utter nonsense.
Decision analysis starts with defining what a decision is - a commitment of resources that can be revoked only at some cost. If there is no cost associated with making the decision, or with changing your mind after the decision has been made, then - in the business domain - the decision was of little if any value. This is the value-at-risk discussion: how much are we willing to risk if we don't know, to some level of confidence, what the outcome of our decision will be?
For any good decision and its decision-making process, we'll need answers to the framing questions, some form of logic for making the decision, defined actionable steps from that decision, and then an assessment of the outcomes to inform future decisions - learning from our decisions.
Decision support systems that implement the process above are based in part on the underlying uncertainties of the systems under management. Research into the cost and schedule behaviors of these systems is well developed. Here's one example.
In the end, the decision-making process will not meet the needs of the decision makers if we don't have alternatives defined, information at hand - and most times this is probabilistic information about future conditions in the presence of uncertainty - and the value we assign to the outcomes. Without these, our decisions are going to turn out badly.
We're driving in the dark with the lights off, while spending other people's money.
Reference Material for Further Understanding
- Strategic Planning with Critical Success Factors and Future Scenarios: An Integrated Strategic Planning Framework, Technical Report CMU/SEI-2010-TR-037, ESC-TR-2010-102.
- Decision Analysis for the Professional, 4th Edition, Peter McNamee and John Celona.
- Real Options: Managing Strategic Investment in an Uncertain World, Martha Amram and Nalin Kulatilaka, Harvard Business School Press, 1999.
- Making Hard Decisions: An Introduction to Decision Analysis, Robert Clemen, Duxbury Press, 1996.
- Software Design as an Investment Activity: A Real Options Perspective, Kevin Sullivan and Prasad Chalasani.
- Probabilistic Modeling as an Exploratory Decision-Making Tool, Martin Pergler and Andrew Freeman, McKinsey & Company, Number 6, September 2008.
- Value at Risk for IS/IT Project and Portfolio Appraisal and Risk Management, Stefan Koch, Department of Information Business, Vienna University of Economics and BA, Austria.
I was quite excited to get Advances in Project Management* as I like to keep up-to-date with what other people think is important in our field. So I was disappointed to see that it wasn’t truly new material. It’s a compilation of extracts and summaries from the Advances in Project Management series from Gower, excluding anything from my book (boo!).
Sour grapes at being left out aside, I have a lot of time for series editor Darren Dalcher and am always a bit star struck when I meet him. He has edited this book which claims to be “an accessible introduction to further topics.” That is certainly true. I found Second Order Project Management by Michael Cavanagh virtually impenetrable but his chapter in this book made it very clear. He quotes a survey respondent as saying:
Our priorities are cost, schedule, resources – when really we should be thinking first about relationships, infrastructure and ways of working.
A bit light on the ‘advances’
If you keep up-to-date with trending topics in project management, you might find like me that this book is a bit light on the ‘advances’. David Cleden, for example, presented his thinking backwards model of dealing with uncertainty years ago at a National Centre for Project Management event. It’s a good technique, and it will be a new concept to many readers, but the book could have been more cutting edge.
Another example: who talks about managing stakeholders anymore? We engage them, and we’ve been engaging them in the current literature for a while now, so I feel the book’s authors should have picked up on that too.
Short chapters, teaser topics
The chapters seem short, leaving you wanting more detail, another case study or perhaps just a practical example of how you can use this technique in your own work.
I suppose that is the point: it is designed as an introduction to the body of work that is the Advances series and it did introduce me to new topics like spirituality in project teams. I haven’t read Julia Neal and Alan Harpham’s book on the subject.
In their chapter, they talk about how spirituality is not the same as religion but personally I found it hard to split the two and couldn’t see how ‘spirituality’, in the way they used it, was any different from ‘project team culture’. Food for thought.
Focusing on ambiguity
The book has a strong focus on risk, uncertainty and ambiguity, as dealing with all of those are definitely topics for the modern, ‘advanced’ project manager.
It would be wrong to say that there is nothing new or innovative in here. There is an interesting bit on earned schedule and the limitations of traditional EV and SPI calculations. It's a long chapter compared to the others and hard to summarise here, but it addresses the problem of SPI equalling 1 even when a project finishes late.
The future of project management
The chapter by Michael Hatfield on the coming of a sea change in project management science is the best in the book. He knocks bloggers, by the way, for contributing suspect ideas about how management is supposed to function. Let’s gloss over that.
I’m not sure that we’ll see a sea change – a metamorphosis – but more a gradual evolution of project management practice. What I liked about his chapter was the fresh thinking. The project management theorists and pundits, he writes, can spout whatever ideas they like about how management should work. But the free marketplace will continue to put into place those ideas that work and ignore those that do not have a positive impact on the accounts.
I laughed out loud when I read:
I fully anticipate that one of the earliest casualties of this process will be much of what passes for modern risk management theory, being the waste of time that it is.
Michael Hatfield and David Hillson could be my new fantasy dinner party guests.
Michael predicts that businesses and practitioners will soon move away from a lot of the prescribed, textbook theory. He advocates for an environment where we pick and choose the pieces that work in real life and dump the rest.
You would struggle to implement any of the ideas in here from the short explanations. You’d struggle to use it as an academic reference – you’d be better off using the authors’ books for that. But if you are trying to select interesting ideas for your own future research or events, identify trends, or you just want to broaden your outlook beyond the PMBOK Guide–Fifth Edition, then this is just the job.
And if you want to read what the editor missed out, take a look at my book, which is about engaging stakeholders throughout the lifecycle of the project and rethinking the traditional approach to lessons learned.
* This article contains affiliate links at no cost to you.
When we hear about software development in the absence of a domain, it's difficult to have a discussion about the appropriate principles, processes, and practices of that work. Here's one paradigm that has served us well.
Steve McConnell's recent post on estimating prompted me to make one more post on this topic. First some background on my domain and point of view.
I work in what are referred to as Software Intensive Systems (SIS) - spanning foundations, the development lifecycle, requirements, analysis and design, implementation, and verification and validation - and those SISs are usually embedded in Systems of Systems (SoS).
This may not be the domain where the No Estimates advocates work. Their systems may not be software intensive, and more often than not are not systems of systems. And as one of the more vocal supporters of No Estimates likes to say, the color of your sky is different from mine. And yes it is: it's blue, and we know why - it's Rayleigh scattering. The reason we know why is that engineers and scientists occupy the hallways of our office, along with all the IT and business software developers running the enterprise IT systems that enable the production of all the SISs embedded in the SoS products.
Here's a familiar framework for the spectrum of software systems.
In project management we do not seek perfect prediction. We seek early warning signals to enable predictive corrective actions.
1. Estimation is often done badly and ineffectively and in an overly time-consuming way.
My company and I have taught upwards of 10,000 software professionals better estimation practices, and believe me, we have seen every imaginable horror story of estimation done poorly. There is no question that “estimation is often done badly” is a true observation of the state of the practice.
The role of estimating is found in many domains. Independent Cost Estimates (ICE) are mandated in many of the domains I work in. Professional estimating organizations provide guidance, materials, and communities: www.iceaa.org, www.aace.org. NASA, DOD, DOE, DHS, DOJ - most every "heavy industry," from dirt moving to writing software for money, has some formalized estimating process.
2. The root cause of poor estimation is usually lack of estimation skills.
Estimation done poorly is most often due to lack of estimation skills. Smart people using common sense is not sufficient to estimate software projects. Reading two page blog articles on the internet is not going to teach anyone how to estimate very well. Good estimation is not that hard, once you’ve developed the skill, but it isn’t intuitive or obvious, and it requires focused self-education or training.
One of the most common estimation problems is people engaging with so-called estimates that are not really Estimates, but that are really Business Targets or requests for Commitments. You can read more about that in my estimation book or watch my short video on Estimates, Targets, and Commitments.
Root Cause Analysis is one of our formal processes. We apply Reality Charting® to all the technologies of our work. RCA is part of governance and continuous process improvement. Conjecturing that estimates are somehow the "smell" of something else - without stating that problem, and most importantly without confirming the root cause of the problem, providing corrective actions, and, most critically, confirming the corrective action removes the root cause - is bad management at best and naive management at worst.
3. Many comments in support of #NoEstimates demonstrate a lack of basic software estimation knowledge.
I don't expect most #NoEstimates advocates to agree with this thesis, but as someone who does know a lot about estimation I think it's clear on its face. Here are some examples:
(a) Are estimation and forecasting the same thing? As far as software estimation is concerned, yes they are. (Just do a Google or Bing search of “definition of forecast”.) Estimation, forecasting, prediction--it's all the same basic activity, as far as software estimation is concerned.
The notion of redefining terms to suit the needs of the speaker is troubling. Estimating is about the past, present, and future. As a former physicist I made estimates of the scattering cross sections of particle collisions, so we knew where to look for the signature of the collision. In a second career - since I really didn't have the original ideas needed for the profession of particle physics - I estimated the signature parameters in mono-pulse Doppler radar signals to identify targets in missile defense systems. The same went for signatures from sonar systems, to separate whales and Biscayne Bay speed boats from Oscar-class Russian submarines.
Forecasting is estimating some outcome in the future. Weather forecasters make predictions of the probability of rain in the coming days.
(b) Is showing someone several pictures of kitchen remodels that have been completed for $30,000 and implying that the next kitchen remodel can be completed for $30,000 estimation? Yes, it is. That’s an implementation of a technique called Reference Class Forecasting.
Reference Class Forecasting is fundamental to good estimating. But other techniques are useful as well: parametric modeling, and design-based models in systems engineering, where SysML has estimating databases. Model Based Design is a well-developed discipline in our domain and others. Even Subject Matter Experts (although actually less desirable) can be a start, with Wideband Delphi.
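A minimal sketch of reference class forecasting, using an invented set of completed kitchen-remodel-sized projects: the estimate is read off the empirical distribution of past outcomes rather than built up from tasks.

```python
# Hypothetical reference class: final costs (in $K) of 12 completed
# kitchen-remodel-sized projects similar to the one being estimated.
reference_class = [24, 28, 29, 30, 31, 31, 33, 34, 36, 38, 41, 47]

def percentile(data, p):
    """Nearest-rank percentile of a sample."""
    s = sorted(data)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

# Quote a range, not a point: the 50th percentile is the typical outcome,
# while the 80th percentile protects against the right tail.
print("P50 estimate:", percentile(reference_class, 50), "$K")
print("P80 estimate:", percentile(reference_class, 80), "$K")
```

The adjustment step still matters: if the new project differs from the reference class in a known way, the distribution must be shifted before quoting from it.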
(c) Is doing a few iterations, calculating team velocity, and then using that empirical velocity data to project a completion date count as estimation? Yes it does. Not only is it estimation, it is a really effective form of estimation. I’ve heard people argue that because velocity is empirically based, it isn’t estimation. Good estimation is empirically based, so that argument exposes a lack of basic understanding of the nature of estimation.
All good estimates are based on some "reference class." Gathering data to build a reference class may be needed. But care is needed in using the "first few sprints" without first answering some questions:
- An estimate is a forecast of the future. Is the future like the past?
- Are there changes in the underlying statistical process in the future that are not accounted for in the past?
- Are the underlying statistical processes for irreducible (aleatory) uncertainty stationary? That is, are the natural variances in the project work the same across the life span of the project, or do they change as time passes?

Empirical estimation requires knowing something about the underlying statistical and probabilistic processes. Without this knowledge, those empirical measurements are "point" measures and not likely to be representative of the future.
(d) Is counting the number of stories completed in each sprint rather than story points, calculating the average number of stories completed each sprint, and using that for sprint planning, estimation? Yes, for the same reasons listed in point (c).
This is estimating. But the numbers alone are no good as "estimators." The variance, and the stability of that variance, are needed. The past is a predictor of the future ONLY if the future is like the past. This is the role of time series analysis, where simple and free tools can be used to produce a credible estimate of the future from the past.
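Here is what using the variance - not just the average - of past throughput might look like. A sketch with invented sprint counts: bootstrap-resample the historical throughput to forecast sprints-to-complete as a distribution rather than a single number.

```python
import random

random.seed(7)

# Hypothetical history: stories completed in each of the last 8 sprints.
throughput_history = [6, 9, 4, 8, 7, 5, 10, 7]
backlog_remaining = 60  # stories left to deliver

def sprints_to_finish():
    """One bootstrap trial: draw sprint throughputs at random from the
    history until the backlog is exhausted."""
    done = sprints = 0
    while done < backlog_remaining:
        done += random.choice(throughput_history)
        sprints += 1
    return sprints

trials = sorted(sprints_to_finish() for _ in range(10_000))
print("P50 forecast:", trials[len(trials) // 2], "sprints")
print("P85 forecast:", trials[int(0.85 * len(trials))], "sprints")
```

Note the stationarity caveat above still applies: the bootstrap assumes the future sprints are drawn from the same process as the past ones.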
(e) Most of the #NoEstimates approaches that have been proposed, including (c) and (d) above, are approaches that were defined in my book Software Estimation: Demystifying the Black Art, published in 2006. The fact that people are claiming these long-ago-published techniques as "new" under the umbrella of #NoEstimates is another reason I say many of the #NoEstimates comments demonstrate a lack of basic software estimation knowledge.
The use of slicing - one proposed #NoEstimates technique - is estimating. Using the "No" in front of "Estimates" and then referencing slicing seems a bit disingenuous. But slicing is subject to the same issue as all reference classes that are not adjusted for future changes: the past may not be like the future. Confirmation and adjustment are part of good estimating.
(f) Is estimation time consuming and a waste of time? One of the most common symptoms of lack of estimation skill is spending too much time on ineffective activities. This work is often well-intentioned, but it’s common to see well-intentioned people doing more work than they need to get worse estimates than they could be getting.
This notion - that those spending the money get to say what is a waste and what is not - would be considered hubris in any other context. In an attempt not to be rude (one of the No Estimates advocates' favorite comebacks when presented with a tough question - à la Jar Jar Binks), estimates are primarily not for those spending the money but for those providing the money. How much, when, and what are business questions that need answers in any non-trivial business transaction. If the need to know is not there, it is likely the "value at risk" for the work is low enough that no one cares what it costs, when it will be done, or what we'll get when we're done.
Just to be crystal clear, I use the term non-trivial to mean a project whose cost and schedule - and possibly whose produced content - when missed, impact the business in a manner detrimental to its operation.
(g) Is it possible to get good estimates? Absolutely. We have worked with multiple companies that have gotten to the point where they are delivering 90%+ of their projects on time, on budget, with intended functionality.
Of course it is, and good estimates happen all the time. Bad estimates happen all the time as well. One of my engagements is with the Performance Assessment and Root Cause Analyses division of the US DOD. Root Cause Analysis of ACAT I Nunn-McCurdy breach programs shows recurring root causes; similar root causes can be found on commercial projects.
One reason many people find estimation discussions (aka negotiations) challenging is that they don't really believe the estimates they came up with themselves. Once you develop the skill needed to estimate well -- as well as getting clear about whether the business is really talking about an estimate, a target, or a commitment -- estimation discussions become more collaborative and easier.
The Basis of Estimate problem is universal. I was on a proposal team that lost to an arch rival because our "basis of estimate" included an unrealistic staffing plan. Building a credible estimate is actual work. The size of the project, the "value at risk," the tolerance for risk, and a myriad of other factors all go into deciding how to make the estimate. All good estimates and estimating practices involve full collaboration.
When management abuse around estimating is called out, it has not been explained how NOT estimating corrects that management abuse.
4. Being able to estimate effectively is a skill that any true software professional needs to develop, even if they don’t need it on every project.
“Estimation often doesn't work very well, therefore software professionals should not develop estimation skill” – this is a common line of reasoning in #NoEstimates. This argument doesn't make any more sense than the argument, "Scrum often doesn't work very well, therefore software professionals should not try to use Scrum." The right response in both cases is, "Get better at the practice," not "Throw out the practice altogether."
The notion "I can't learn to estimate well" is not the same as "it's not possible to learn to estimate well." There are professional estimating organizations, books, journals, and courses. What is really being said is "I don't want to learn to estimate."
#NoEstimates advocates say they're just exploring the contexts in which a person or team might be able to do a project without estimating. That exploration is fine, but until someone can show that the vast majority of projects do not need estimates at all, deciding to not estimate and not develop estimations skills is premature. And my experience tells me that when all the dust settles, the cases in which no estimates are needed will be the exception rather than the rule. Thus software professionals will benefit -- and their organizations will benefit -- from developing skill at estimation.
The #NoEstimates advocates appear not to have asked those paying their salaries what they need in terms of estimates. Ignore for the moment the Dilbert managers. This is a day-one issue. #NoEstimates willfully ignores the needs of the business, and when called on it, says "if management needs estimates, we should estimate." Any manager accountable for a non-trivial expenditure who doesn't have some type of Estimate to Complete and Estimate at Completion isn't going to be a manager for very long when the project shows up late, over budget, and without the needed capabilities.
I would go further and say that a true software professional should develop estimation skill so that you can estimate competently on the numerous projects that require estimation. I don't make these claims about software professionalism lightly. I spent four years as chair of the IEEE committee that oversees software professionalism issues for the IEEE, including overseeing the Software Engineering Body of Knowledge, university accreditation standards, professional certification programs, and coordination with state licensing bodies. I spent another four years as vice-chair of that committee. I also wrote a book on the topic, so if you're interested in going into detail on software professionalism, you can check out my book, Professional Software Development. Or you can check out a much briefer, more specific explanation in my company's white paper about our Professional Development Ladder.
5. Estimates serve numerous legitimate, important business purposes.
Estimates are used by businesses in numerous ways, including:
- Allocating budgets to projects (i.e., estimating the effort and budget of each project)
- Making cost/benefit decisions at the project/product level, which is based on cost (software estimate) and benefit (defined feature set)
- Deciding which projects get funded and which do not, which is often based on cost/benefit
- Deciding which projects get funded this year vs. next year, which is often based on estimates of which projects will finish this year
- Deciding which projects will be funded from CapEx budget and which will be funded from OpEx budget, which is based on estimates of total project effort, i.e., budget
- Allocating staff to specific projects, i.e., estimates of how many total staff will be needed on each project
- Allocating staff within a project to different component teams or feature teams, which is based on estimates of scope of each component or feature area
- Allocating staff to non-project work streams (e.g., budget for a product support group, which is based on estimates for the amount of support work needed)
- Making commitments to internal business partners (based on projects’ estimated availability dates)
- Making commitments to the marketplace (based on estimated release dates)
- Forecasting financials (based on when software capabilities will be completed and revenue or savings can be booked against them)
- Tracking project progress (comparing actual progress to planned (estimated) progress)
- Planning when staff will be available to start the next project (by estimating when staff will finish working on the current project)
- Prioritizing specific features on a cost/benefit basis (where cost is an estimate of development effort)
These are just a subset of the many legitimate reasons that businesses request estimates from their software teams. I would be very interested to hear how #NoEstimates advocates suggest that a business would operate if you remove estimates for each of these purposes.
The #NoEstimates response to these business needs is typically of the form, “Estimates are inaccurate and therefore not useful for these purposes” rather than, “The business doesn’t need estimates for these purposes.”
That argument really just says that businesses are currently operating on the basis of much worse predictions than they should be, and probably making poorer decisions as a result, because the software staff are not providing very good estimates. If software staff provided more accurate estimates, the business would make better decisions in each of these areas, which would make the business stronger.
The other #NoEstimates response is that "Estimates are always waste." I don't agree with that. By that line of reasoning, daily stand ups are waste. Sprint planning is waste. Retrospectives are waste. Testing is waste. Everything but code-writing itself is waste. I realize there are Lean purists who hold those views, but I don't buy any of that.
Estimates, done well, support business decision making, including the decision not to do a project at all. Taking the #NoEstimates philosophy to its logical conclusion, if #NoEstimates eliminates waste, then #NoProjectAtAll eliminates even more waste. In most cases, the business will need an estimate to decide not to do the project at all.
In my experience businesses usually value predictability, and in many cases, they value predictability more than they value agility. Do businesses always need predictability? No, there are few absolutes in software. Do businesses usually need predictability? In my experience, yes, and they need it often enough that doing it well makes a positive contribution to the business. Responding to change is also usually needed, and doing it well also makes a positive contribution to the business. This whole topic is a case where both predictability and agility work better than either/or. Competency in estimation should be part of the definition of a true software professional, as should skill in Scrum and other agile practices.
Estimates are the basis of managerial finance and decision making in the presence of uncertainty (the microeconomics of software development). The accuracy and precision of the estimates are usually determined by the value at risk: from low risk, which may mean no estimates, to high risk, which means frequently updated, independently validated estimates. But in nearly all business decisions - unless the value at risk can be written off - there is a need to know something about the potential loss as well as the potential gain.
6. Part of being an effective estimator is understanding that different estimation techniques should be used for different kinds of estimates.
One thread that runs throughout the #NoEstimates discussions is lack of clarity about whether we’re estimating before the project starts, very early in the project, or after the project is underway. The conversation is also unclear about whether the estimates are project-level estimates, task-level estimates, sprint-level estimates, or some combination. Some of the comments imply ineffective attempts to combine kinds of estimates—the most common confusion I’ve read is trying to use task-level estimates to estimate a whole project, which is another example of lack of software estimation skill.
You can see a summary of estimation techniques and their areas of applicability here. This quick reference sheet assumes familiarity with concepts and techniques from my estimation book and is not intended to be intuitive on its own. But just looking at the categories you can see that different techniques apply for estimating size, effort, schedule, and features. Different techniques apply for small, medium, and large projects. Different techniques apply at different points in the software lifecycle, and different techniques apply to Agile (iterative) vs. Sequential projects. Effective estimation requires that the right kind of technique be applied to each different kind of estimate.
Learning these techniques is not hard, but it isn't intuitive. Learning when to use each technique, as well as learning each technique, requires some professional skills development.
When we separate the kinds of estimates we can see parts of projects where estimates are not needed. One of the advantages of Scrum is that it eliminates the need to do any sort of miniature milestone/micro-stone/task-based estimates to track work inside a sprint. If I'm doing sequential development without Scrum, I need those detailed estimates to plan and track the team's work. If I'm using Scrum, once I've started the sprint I don't need estimation to track the day-to-day work, because I know where I'm going to be in two weeks and there's no real value added by predicting where I'll be day-by-day within that two week sprint.
That doesn't eliminate the need for estimates in Scrum entirely, however. I still need an estimate during sprint planning to determine how much functionality to commit to for that sprint. Backing up earlier in the project, before the project has even started, businesses need estimates for all the business purposes described above, including deciding whether to do the project at all. They also need to decide how many people to put on the project, how much to budget for the project, and so on. Treating all the requirements as emergent on a project is fine for some projects, but you still need to decide whether you're going to have a one-person team treating requirements as emergent, or a five-person team, or a 50-person team. Defining team size in the first place requires estimation.
7. Estimation and planning are not the same thing, and you can estimate things that you can’t plan.
Many of the examples given in support of #NoEstimates are actually indictments of overly detailed waterfall planning, not estimation. The simple way to understand the distinction is to remember that planning is about “how” and estimation is about “how much.”
Can I “estimate” a chess game, if by “estimate” I mean how each piece will move throughout the game? No, because that isn’t estimation; it’s planning; it’s “how.”
Can I estimate a chess game in the sense of “how much”? Sure. I can collect historical data on the length of chess games and know both the average length and the variation around that average and predict the length of a game.
More to the point, estimating an individual software project is not analogous to estimating one chess game. It's analogous to estimating a series of chess games. People who are not skilled in estimation often assume it's more difficult to estimate a series of games than to estimate an individual game, but estimating the series is actually easier. Indeed, the more chess games in the set, the more accurately we can estimate the set, once we understand the math involved.
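The math behind that claim can be sketched with a quick simulation. All numbers below are invented for illustration; the point is only the shape of the result.

```python
import random
import statistics

# Sketch of why a series is easier to estimate than one game.
# All numbers here are invented for illustration.
random.seed(42)

def game_length():
    """Length of one game: mean ~40 moves, wide variation (assumed)."""
    return max(10.0, random.gauss(40, 15))

def relative_spread(n_games, trials=5000):
    """Std dev of the series total, as a fraction of its mean."""
    totals = [sum(game_length() for _ in range(n_games))
              for _ in range(trials)]
    return statistics.stdev(totals) / statistics.mean(totals)

for n in (1, 10, 100):
    print(f"{n:>3} games: relative spread ~ {relative_spread(n):.2f}")
# The relative spread shrinks roughly as 1/sqrt(n): independent
# variations partially cancel when many games are summed.
```

Each individual game stays just as unpredictable; only the proportional uncertainty of the total shrinks, which is why a project made of many variable tasks can be estimated more accurately than any one of its tasks.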
This all goes back to the idea that we need estimates for different purposes at different points in a project. An agile project may be about "steering" rather than estimating once the project gets underway. But it may not be allowed to get underway in the first place if there aren't early estimates that show there's a business case for doing the project.
Plans are strategies for the success of the project. Planning starts with defining what accomplishments must occur and how those accomplishments will be assessed, in units of measure meaningful to the decision makers. Choices made during planning, and most certainly during execution, are informed by estimates of the future outcomes of decisions made today and of possible decisions made in the future. This is the basis of the microeconomics of decision making.
#NoEstimates advocates often invoke strategy making when what they are actually describing is operational effectiveness. Strategic decision making is a critical success factor for non-trivial projects.
8. You can estimate what you don’t know, up to a point.
In addition to estimating “how much,” you can also estimate “how uncertain.” In the #NoEstimates discussions, people throw out lots of examples along the lines of, “My project was doing unprecedented work in Area X, and therefore it was impossible to estimate the whole project.” This is essentially a description of the common estimation mistake of allowing high variability in one area to insert high variability into the whole project's estimate rather than just that one area's estimate.
Most projects contain a mix of precedented and unprecedented work (also known as certain/uncertain, high risk/low risk, predictable/unpredictable, high/low variability--all of which are loose synonyms as far as estimation is concerned). Decomposing the work, estimating uncertainty in each area, and building up an overall estimate that includes that uncertainty proportionately is one technique for dealing with uncertainty in estimates.
Why would that ever be needed? Because a business that perceives a whole project as highly risky might decide not to approve the whole project. A business that perceives a project as low to moderate risk overall, with selected areas of high risk, might decide to approve that same project.
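The decomposition technique can be sketched as follows, with invented numbers: estimate each area with its own uncertainty, then combine them. Treating the areas as statistically independent, variances add; standard deviations do not.

```python
import math

# Hypothetical sketch of building a project estimate from decomposed
# areas. Each area gets a mean effort and a standard deviation that
# reflects its uncertainty; all values are invented for illustration.
areas = {
    "UI (precedented)":        (120, 12),  # (mean staff-days, std dev)
    "Reporting (precedented)": (80,  10),
    "New algorithm (risky)":   (60,  45),  # high-variability area
}

total_mean = sum(mean for mean, _ in areas.values())
# Independent areas: variances add, so the total std dev is the
# root-sum-square of the per-area std devs.
total_sd = math.sqrt(sum(sd ** 2 for _, sd in areas.values()))

print(f"Total: {total_mean} +/- {total_sd:.0f} staff-days")
for name, (mean, sd) in areas.items():
    print(f"  {name}: {sd / mean:.0%} relative uncertainty")
print(f"  Whole project: {total_sd / total_mean:.0%} relative uncertainty")
```

The high-risk area carries roughly 75% relative uncertainty on its own, yet the whole-project estimate lands below 20%: the uncertainty is contained in proportion to that area's share of the work rather than contaminating the entire estimate.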
You can estimate anything that is knowable. You personally may not know it, so go find someone who does: do research, "explore," experiment, build models, build prototypes. Do whatever is necessary to improve your knowledge of the uncertainties (epistemic uncertainty) and your understanding of the natural variance (aleatory uncertainty). But if it's knowable, don't say it's unknown. It's just unknown to you.
The classic error of unbounded hubris about estimates comes from Donald Rumsfeld's invocation of "unknown unknowns" in the run-up to the second Iraq war. He had apparently never read The Histories of Herodotus (5th century B.C.), which warned the reader not to go to what is now Iraq: the tribal powers will never comply with your will. The same holds for what is now Afghanistan, from which Alexander the Great was ejected by the local tribesmen.
9. Both estimation and control are needed to achieve predictability.
Much of the writing on Agile development emphasizes project control over project estimation. I actually agree that project control is more powerful than project estimation; however, effective estimation usually plays an essential role in achieving effective control.
Closed-loop control, and especially feedforward adaptive control, requires estimating future states before they unfavorably impact the outcome. This means estimating. Software development is a closed-loop adaptive control system.
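A minimal sketch of that closed loop, with invented numbers: observed velocity feeds a re-estimate of completion, and the re-estimate drives the steering decision. Without the estimate there is no steering signal.

```python
# Sketch (invented numbers) of estimation as the feedback signal in a
# closed-loop view of a project: observed progress -> re-estimate ->
# steering decision.
backlog = 400                    # story points remaining (assumed)
deadline_sprints = 10            # steering target (assumed)
observed = [30, 28, 35, 25, 32]  # velocity per completed sprint (assumed)

for sprint, velocity in enumerate(observed, start=1):
    backlog -= velocity
    avg_velocity = (400 - backlog) / sprint       # measured capacity
    forecast = sprint + backlog / avg_velocity    # estimate at completion
    action = ("on track" if forecast <= deadline_sprints
              else "cut scope or add capacity")
    print(f"sprint {sprint}: forecast {forecast:.1f} sprints -> {action}")
```

Open-loop control, by contrast, would keep executing the original plan with no forecast and therefore no corrective action.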
To put this in Agile Manifesto-like terms:
We have come to value project control over project estimation,
as a means of achieving predictability.
My first disagreement with Steve: control is based on estimating, and both are needed in any closed-loop control system. By the way, the "slicing" conjecture is not closed-loop control; there is no steering target. The slicing data does not say what performance (in slices or whatever units you want) is needed to meet the goals of the project. Slicing is open-loop control. The #NoEstimates advocates need to pick up any control systems book to see how this works.
As in the Agile Manifesto, we value both terms, which means we still value the term on the right.
#NoEstimates seems to pay lip service to both terms, but the emphasis from the hashtag onward is really about discarding the term on the right. This is another case where I believe the right answer is both/and, not either/or.
I wrote an essay when I was Editor in Chief of IEEE Software called "Sitting on the Suitcase" that discussed the interplay between estimation and control and explained why we estimate even though we know the activity has inherent limitations. This is still one of my favorite essays.
10. People use the word "estimate" sloppily.
No doubt. Lack of understanding of estimation is not limited to people tweeting about #NoEstimates. Business partners often use the word “estimate” to refer to what would more properly be called a “planning target” or “commitment.”
The word "estimate" does have a clear definition, for those who want to look it up.
The gist of these definitions is that an "estimate" is something that is approximate, rough, or tentative, and is based upon impressions or opinion. People don't always use the word that way, and you can see my video on that topic here.
Better yet, how about definitions from the actual estimating community:
- Software Cost Estimation with COCOMO II
- Software Sizing and Estimating
- Forecasting and Simulating Software Development Projects: Effective Modeling of Kanban and Scrum Projects using Monte-Carlo Simulation
- Estimating Software-Intensive Systems: Projects, Products, and Processes
- Making Hard Decisions
- Forecasting Methods and Applications
- Probability Methods for Cost Uncertainty Analysis
- Cost Estimate Classification System, AACEI
- Cost Estimating Body of Knowledge, ICEAA
- Parametric Estimating Handbook, ICEAA
- Basic Software Cost Estimating, CEB 09, ICEAA Online
The last opens with "Any sufficiently advanced technology is indistinguishable from magic" (Arthur C. Clarke). This may be one of the root causes of the #NoEstimates position: its advocates have encountered a sufficiently advanced technology, see it as magic, and conclude it is not within their grasp.
There is no need to redefine anything. The estimating community has done that already.
Because people use the word sloppily, one common mistake software professionals make is trying to create a predictive, approximate estimate when the business is really asking for a commitment, or for a plan to meet a target, but is using the word "estimate" to ask for it. It's common for businesses to think they have a problem with estimation when the bigger problem is with their commitment process.
We have worked with many companies to achieve organizational clarity about estimates, targets, and commitments. Clarifying these terms makes a huge difference in the dynamics around creating, presenting, and using software estimates effectively.
11. Good project-level estimation depends on good requirements, and average requirements skills are about as bad as average estimation skills.
A common refrain in Agile development is “It’s impossible to get good requirements,” and that statement has never been true. I agree that it’s impossible to get perfect requirements, but that isn’t the same thing as getting good requirements. I would agree that “It is impossible to get good requirements if you don’t have very good requirement skills,” and in my experience that is a common case. I would also agree that “Projects usually don’t have very good requirements,” as an empirical observation—but not as a normative statement that we should accept as inevitable.
If you don't know where you are going, you'll end up someplace else. - Yogi Berra
Figure it out; don't put up with laziness. Use capabilities-based planning to elicit the requirements. What do you want this thing to do when it's done? If you don't know, why are you spending the customer's money to build it?
Agile is essentially spending the customer's money to find out what the customer doesn't know. Ask first: is this the best use of the money?
Like estimation skill, requirements skill is something that any true software professional should develop, and the state of the art in requirements at this time is far too advanced for even really smart people to invent everything they need to know on their own. Like estimation skill, a person is not going to learn adequate requirements skills by reading blog entries or watching short YouTube videos. Acquiring skill in requirements requires focused, book-length self-study or explicit training or both.
If your business truly doesn’t care about predictability (and some truly don’t), then letting your requirements emerge over the course of the project can be a good fit for business needs. But if your business does care about predictability, you should develop the skill to get good requirements, and then you should actually do the work to get them. You can still do the rest of the project using by-the-book Scrum, and then you’ll get the benefits of both good requirements and Scrum.
From my point of view, I often see agile-related claims that look kind of like this: What practices should you use if you have:
- Mediocre skill in Estimation
- Mediocre skill in Requirements
- Good to excellent skill in Scrum and Related Practices
Not too surprisingly, the answer to this question is, Scrum and Related Practices. I think a more interesting question is, What practices should you use if you have:
- Good to excellent skill in Estimation
- Good to excellent skill in Requirements
- Good to excellent skill in Scrum and related practices
Having competence in multiple areas opens up some doors that will be closed with a lesser skill set. In particular, it opens up the ability to favor predictability if your business needs that, or to favor flexibility if your business needs that. Agile is supposed to be about options, and I think that includes the option to develop in the way that best supports the business.
12. The typical estimation context involves moderate volatility and moderate levels of unknowns.
Ron Jeffries writes, “It is conventional to behave as if all decent projects have mostly known requirements, low volatility, understood technology, …, and are therefore capable of being more or less readily estimated by following your favorite book.” I don’t know who said that, but it wasn’t me, and I agree with Ron that that statement doesn’t describe most of the projects that I have seen.
The color of Ron's sky must not be blue, the normal color. Every project we work on has volatile requirements.
Don't undertake a project unless it is manifestly important and nearly impossible. - Edwin Land
For enterprise IT there are databases showing the performance of past projects.
I think it would be more true to say, “The typical software project has requirements that are knowable in principle, but that are mostly unknown in practice due to insufficient requirements skills; low volatility in most areas with high volatility in selected areas; and technology that tends to be either mostly leading edge or mostly mature." In other words, software projects are challenging, but the challenge level is manageable. If you have developed the full set of skills a software professional should have, you will be able to overcome most of the challenges or all of them.
Of course there is a small percentage of projects that do have truly unknowable requirements and across-the-board volatility. I consider those to be corner cases. It’s good to explore corner cases, but also good not to lose sight of which cases are most common.
13. Responding to change over following a plan does not imply not having a plan.
It’s amazing that in 2015 we’re still debating this point. Many of the #NoEstimates comments literally emphasize not having a plan, i.e., treating 100% of the project as emergent. They advocate a process—typically Scrum—but no plan beyond instantiating Scrum.
Plans are strategies for the success of the project. Strategies are hypotheses, and hypotheses need tests (experiments) to continually validate them. Ron can lecture us all he wants, but agile is a software development paradigm embedded in a larger strategic business paradigm, and plans come from there. That's how enterprises function. Both are needed.
According to the Agile Manifesto, while agile is supposed to value responding to change, it also is supposed to value following a plan. The Agile Manifesto says, "there is value in the items on the right" which includes the phrase "following a plan."
While I agree that minimizing planning overhead is good project management, doing no planning at all is inconsistent with the Agile Manifesto, not acceptable to most businesses, and wastes some of Scrum's capabilities. One of the amazingly powerful aspects of Scrum is that it gives you the ability to respond to change; that doesn’t imply that you need to avoid committing to plans in the first place.
My company and I have seen Agile adoptions shut down in some companies because an Agile team is unwilling to commit to requirements up front or refuses to estimate up front. As a strategy, that’s just dumb. If you fight your business about providing estimates, even if you win the argument that day, you will still get knocked down a peg in the business’s eyes.
I've commented in other contexts that I have come to the conclusion that most businesses would rather be wrong than vague. Businesses prefer to plant a stake in the ground and move it later rather than avoiding planting a stake in the ground in the first place. The assertion that businesses value flexibility over predictability is Agile's great unvalidated assumption. Some businesses do value flexibility over predictability, but most do not. If in doubt, ask your business.
If your business does value predictability, use your velocity to estimate how much work you can do over the course of a project, and commit to a product backlog based on your demonstrated capacity for work. Your business will like that. Then, later, when your business changes its mind—which it probably will—you’ll still be able to respond to change. Your business will like that even more.
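A hedged sketch of that approach, using invented velocity data: commit based on demonstrated capacity, taking a conservative figure rather than the best sprint so the commitment is one you can actually keep.

```python
import statistics

# Commit to a backlog based on demonstrated velocity (invented data).
velocities = [22, 27, 25, 31, 24, 26]  # observed points per sprint (assumed)
sprints_in_release = 6

# One illustrative convention: commit at mean minus one standard
# deviation, and treat the mean-velocity projection as stretch.
conservative = statistics.mean(velocities) - statistics.stdev(velocities)
commitment = round(conservative * sprints_in_release)
stretch = round(statistics.mean(velocities) * sprints_in_release)

print(f"Commit to ~{commitment} points; treat up to ~{stretch} as stretch")
```

The "mean minus one standard deviation" rule is just one convention for this sketch; percentile-based forecasts drawn from historical throughput serve the same purpose.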
14. Scrum provides better support for estimation than waterfall ever did, and there does not have to be a trade off between agility and predictability.
Not quite true. Waterfall projects have excellent estimating processes. The trouble is that during execution things change. When the plan and the estimate aren't updated to match that change, which is one of the root causes of project failure, the estimate becomes of little use. Applying agile processes to estimating is the same as applying agile processes to coding: frequent assessments of progress against the plan, and corrective actions when variances appear.
Some of the #NoEstimates discussion seems to interpret challenges to #NoEstimates as challenges to the entire ecosystem of Agile practices, especially Scrum. Many of the comments imply that estimation will somehow impair agility. The examples cited to support that are mostly examples of unskilled misapplications of estimation practices, so I see them as additional examples of people not understanding estimation very well.
The idea that we have to trade off agility to achieve predictability is a false trade off. If we define "agility" to mean, "no notion of our destination" or "treat all the requirements on the project as emergent," then of course there is a trade off, by definition. If, on the other hand, we define "agility" as "ability to respond to change," then there doesn't have to be any trade off. Indeed, if no one had ever uttered the word “agile” or applied it to Scrum, I would still want to use Scrum because of its support for estimation and predictability, as well as for its support for responding to change.
The combination of story pointing, velocity calculation, product backlog, short iterations, just-in-time sprint planning, and timely retrospectives after each sprint creates a nearly perfect context for effective estimation. To put it in estimation terminology, story pointing is a proxy-based estimation technique. Velocity is calibrating the estimate with project data. The product backlog (when constructed with estimation in mind) gives us a very good proxy for size. Sprint planning and retrospectives give us the ability to "inspect and adapt" our estimates. All this means that Scrum provides better support for estimation than waterfall ever did.
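In estimation terms, that calibration loop can be sketched minimally (all numbers invented): the story-point backlog is the size proxy, and each completed sprint recalibrates the remaining-work forecast against real project data.

```python
# Story points as a size proxy, calibrated sprint by sprint with
# observed velocity (all numbers invented for illustration).
backlog_points = 300                  # proxy-based size estimate (assumed)
velocity_history = []

for completed in [24, 29, 26]:        # observed sprint outcomes (assumed)
    velocity_history.append(completed)
    calibrated = sum(velocity_history) / len(velocity_history)
    remaining = backlog_points - sum(velocity_history)
    print(f"after sprint {len(velocity_history)}: "
          f"~{remaining / calibrated:.1f} sprints of work remain")
```

The forecast improves as a side effect of running the process: no separate estimation ceremony is needed once the proxy and the calibration data are in place.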
If a company truly is operating in a high uncertainty environment, Scrum can be an effective approach. In the more typical case in which a company is operating in a moderate uncertainty environment, Scrum is well-equipped to deal with the moderate level of uncertainty and provide high predictability (e.g., estimation) at the same time.
15. There are contexts where estimates provide little value.
I don’t estimate how long it will take me to eat dinner, because I know I’m going to eat dinner regardless of what the estimate says. If I have a defect that keeps taking down my production system, the business doesn’t need an estimate for that because the issue needs to get fixed whether it takes an hour, a day, or a week.
The most common context I see where estimates are not done on an ongoing basis and truly provide little business value is online contexts, especially mobile, where the cycle times are measured in days or shorter, the business context is highly volatile, and the mission truly is, “Always do the next most useful thing with the resources available.”
In both these examples, however, there is a point on the scale at which estimates become valuable. If the work on the production system stretches into weeks or months, the business is going to want and need an estimate. As the mobile app matures from one person working for a few days to a team of people working for a few weeks, with more customers depending on specific functionality, the business is going to want more estimates. As the group doing the work expands, they'll need budget and headcount, and those numbers are determined by estimates. Enjoy the #NoEstimates context while it lasts; don’t assume that it will last forever.
Start with value at risk: what are you willing to lose if your estimate is wrong? Then decide whether the cost of estimating is justified by that risk.
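That test reduces to simple expected-value arithmetic. All numbers below are invented for illustration:

```python
# Value-at-risk test for whether producing an estimate is worth its cost
# (all figures assumed for the sketch).
value_at_risk = 500_000   # loss if we commit blind and are wrong ($)
p_wrong_without = 0.40    # chance of a costly miss with no estimate
p_wrong_with = 0.10       # chance after skilled estimation
cost_of_estimating = 15_000

expected_saving = value_at_risk * (p_wrong_without - p_wrong_with)
print(f"Expected saving from estimating: ${expected_saving:,.0f}")
print("Worth estimating" if expected_saving > cost_of_estimating
      else "Skip the estimate")
```

When the value at risk is small, as in the dinner and production-defect examples above, the same arithmetic correctly says to skip the estimate.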
16. This is not religion. We need to get more technical and more economic about software discussions.
I’ve seen #NoEstimates advocates treat these questions of requirements quality, estimation effectiveness, agility, and predictability as value-laden moral discussions. "Agile" is a compliment and "Waterfall" is an invective. The tone of the argument is more moral than economic. The arguments are of the form, "Because this practice is good," rather than of the form, "Because this practice supports business goals X, Y, and Z."
That religion isn’t unique to Agile advocates, and I’ve seen just as much religion on the non-Agile sides of various discussions. It would be better for the industry at large if people could stay more technical and economic more often.
Agile is About Creating Options, Right?
I subscribe to the idea that engineering is about doing for a dime what any fool can do for a dollar, i.e., it's about economics. If we assume professional-level skills in agile practices, requirements, and estimation, the decision about how much work to do up front on a project should be an economic decision about which practices will achieve the business goals in the most cost-effective way. We consider issues including the cost of changing requirements and the value of predictability. If the environment is volatile and a high percentage of requirements are likely to spoil before they can be implemented, then it’s a bad economic decision to do lots of up front requirements work. If predictability provides little or no business value, emphasizing up front estimation work would be a bad economic decision.
On the other hand, if predictability does provide business value, then we should support that in a cost-effective way. If we do a lot of the requirements work up front, and some requirements spoil, but most do not, and that supports improved predictability, that would be a good economic choice.
The economics of these decisions are affected by the skills of the people involved. If my team is great at Scrum but poor at estimation and requirements, the economics of up front vs. emergent will tilt toward Scrum. If my team is great at estimation and requirements but poor at Scrum, the economics will tilt toward estimation and requirements.
Of course, skill sets are not divinely dictated or cast in stone; they can be improved through focused self-study and training. So we can treat the decision to invest in skills development as an economic issue too.
Decision to Develop Skills is an Economic Decision Too
What is the cost of training staff to reach competency in estimation and requirements? Does the cost of achieving competency exceed the likely benefits that would derive from competency? That goes back to the question of how much the business values predictability. If the business truly places no value on predictability, there won’t be any ROI from training staff in practices that support predictability. But I do not see that as the typical case.
My company and I can train software professionals to approach competency in both requirements and estimation in about a week. In my experience most businesses place enough value on predictability that investing a week to make that option available provides a good ROI to the business. Note: this is about making the option available, not necessarily exercising the option on every project.
My company and I can also train software professionals to approach competency in a full complement of Scrum and other Agile technical practices in about a week. That produces a good ROI too. In any given case, I would recommend both sets of training. If I had to recommend only one or the other, sometimes I would recommend starting with the Agile practices. But my real recommendation is to "embrace the and" and develop both sets of skills.
For context about training software professionals to "approach competency" in requirements, estimation, Scrum, and other Agile practices, I am using that term based on work we've done with our Professional Development Ladder. In that ladder we define capability levels of "Introductory," "Competence," "Leadership," and "Mastery." A few days of classroom training will advance most people beyond Introductory and much of the way toward Competence in a particular skill. Additional hands-on experience, mentoring, and feedback will be needed to cement Competence in an area. Classroom study is just one way to acquire these skills. Self-study or working with an expert mentor can work about as well. The skills aren't hard to learn, but they aren't self-evident either. As I've said above, the state of the art in estimation, requirements, and agile practices has moved well beyond what even a smart person can discover on their own. Focused professional development of some kind or other is needed to acquire these skills.
Is a week enough to accomplish real competency? My company has been training software professionals for almost 20 years, and our consultants have trained upwards of 50,000 software professionals during that time. All of our consultants are highly experienced software professionals first, trainers second. We don't have any methodological ax to grind, so we focus on what is best for each individual client. We all work hands-on with clients so we know what is actually working on the ground and what isn't, and that experience feeds back into our training. We have also invested heavily in training our consultants to be excellent trainers. As a result, our service quality is second to none, and we can make a tremendous amount of progress with a few days of training. Of course additional coaching, mentoring and support are always helpful.
17. Agility plus predictability is better than agility alone.
Agility in the absence of steering targets, created by estimating in the presence of uncertainty, is of little value. Any closed-loop control system requires rapid response to changing conditions and a steering signal, which may require an estimate of where we want to be when we arrive.
Skills development in practices that support estimation and predictability vs. practices that support agility is not an either/or choice. A truly agile business would be able to be flexible when needed, or predictable when needed. A true software professional will be most effective when skilled in both skill sets.
If you think your business values agility only, ask your business what it values. Businesses vary, and you might work in a business that truly does value agility over predictability or that values agility exclusively. Many businesses value predictability over agility, however, so don't just assume it's one or the other.
I think it’s self-evident that a business that has both agility and predictability will outperform a business that has agility only. With today's powerful agile practices, especially Scrum, there's no reason we can't have both.
Overall, #NoEstimates seems like the proverbial solution in search of a problem. I don't see businesses clamoring to get rid of estimates. I see them clamoring to get better estimates. The good news for them is that agile practices, Scrum in particular, can provide excellent support for agility and estimation at the same time.
My closing thought, in this hashtag-happy discussion, is that #AgileWithEstimationWorksBest -- and #EstimationWithAgileWorksBest too.
Woody has successfully created what he wanted: a discussion of sorts about estimating. The trouble is that without principles the discussion turns into personal anecdotes rather than fact-based dialog. Those of us asking for fact-based examples are then seen as improperly challenging the anecdotes, and since no fact-based response has yet appeared, the need to improve the probability of success for software development goes unanswered, replaced by accusations and name calling.
The door of a bigoted mind opens outwards. The pressure of facts merely closes it more snugly.
- Ogden Nash
When new ideas are being conjectured, it is best for the conversation to establish the principles on which those ideas can be tested. Without this, the person making the conjecture has to defend the idea on personality, personal anecdotes, and personal experience alone.
When I got a copy of A Practical Guide to Dealing with Difficult Stakeholders* I thought long and hard about the most difficult people I had worked with. There was the ex-army captain who scared me a bit in my first project management job. There was a security manager who was so laid back that the anti-fraud project we worked on never really got off the ground. There was a programme manager who didn’t brief me before a meeting and then didn’t turn up himself, leaving me to chair a workshop on a subject I knew nothing about.
These were all difficult, but they weren’t as difficult as some of the case studies I read in Difficult Stakeholders. Perhaps that’s because I’ve not worked in a professional services firm. Perhaps it’s because I’ve chosen my employers, teams and projects well. Maybe I inspire people to be their best possible self so they aren’t difficult for me (ha! Can’t believe I just wrote that). More likely I’ve just been lucky.
Three people who have not had the same experiences are the authors of that book. I caught up with them recently to find out what advice they had for dealing with difficult people at work.
Today I’m interviewing co-author Roger Joby, a consultant with a background in pharma and managing director of R&NR Consulting Ltd.
Hello, Roger. In what ways are project stakeholders difficult? How does that behaviour manifest itself on a project?
Hello, Elizabeth. There are many ways that stakeholders can be difficult, from out-and-out hostility to a lack of interest and motivation. In an ideal world a project manager would be able to select his or her team, making sure that they were all highly motivated and supportive, and sponsors would always take a consistent and balanced view.
In my experience, reality often falls a long way short of this ideal. Project teams often include people who would prefer to be on a different project or even in a different job, and sponsors can be less than understanding when faced with the unexpected.
Apart from motivating your team and placating your sponsor you also need to be aware of the more peripheral stakeholders that can also spoil your day.
Do people know they are being difficult most of the time? If not, what is your top tip for pointing it out?
It is true that many stakeholders, particularly those who are not involved in the project as their main function (e.g. members of the finance department), can often be unintentionally obstructive. In most cases this can be addressed by simply explaining the situation and the impact they are having on the project.
Yes, I’ve had to do that before. Is the sponsor the most important stakeholder?
In most cases the sponsor will be your most important stakeholder because they have the authority. They are directly involved in the project and they can use their authority to help the project manager to influence other stakeholders.
Hopefully for the better. So what made you think of writing this book?
Project management training concentrates predominantly on tools and processes, but it is people that have the biggest influence on project success.
That’s true. The latest PMBOK Guide finally includes a section on stakeholder management, and I’ve written a lot about the topic myself.
Throughout the evolution of A Practical Guide to Dealing with Difficult Stakeholders, one of the overriding objectives was to redress this balance by looking at how to deal with people.
What do you hope project managers will get out of it?
I hope that the book will act as a reality check. No matter how good your tools and system are you still have to deal with people, and those people for one reason or another may have a very different perspective about the value of your project.
Next time I’ll be interviewing Jake Holloway about the most unhelpful stakeholder he has ever worked with. You’ll want to read that story!
* This article contains affiliate links at no cost to you.
The Cone of Uncertainty chart comes from the original work of Barry Boehm, "Reducing Estimation Uncertainty with Continuous Estimation: Assessment Tracking with 'Cone of Uncertainty.'" In this paper Dr. Boehm speaks to the lack of continuous updating of the estimates made early in the program as the source of unfavorable cost and schedule outcomes.
As long as projects are not re-assessed and estimates are not revisited, the cone of uncertainty is not effectively reduced.
The Cone of Uncertainty is a notional example of how to increase the accuracy and precision of software development estimates with continuous reassessments. For programs in the federal space subject to FAR 34.2 and DFARS 34.201, reporting the Estimate to Complete (ETC) and Estimate at Completion (EAC) is mandatory on a monthly basis. This is rarely done in the commercial world, with the expected results shown in Todd's chart for his data and DeMarco's data.
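For readers outside the FAR/DFARS world, the monthly ETC/EAC arithmetic mentioned above can be sketched with the standard Earned Value Management formulas. The numbers below are invented for illustration, not data from any actual program:

```python
# A minimal sketch of the standard Earned Value Management formulas used
# for monthly ETC/EAC reporting. All inputs are illustrative numbers.

def evm_forecast(bac, ev, ac):
    """Return (cpi, etc, eac) from Budget At Completion, Earned Value,
    and Actual Cost, assuming current cost efficiency continues."""
    cpi = ev / ac    # Cost Performance Index: value earned per dollar spent
    eac = bac / cpi  # Estimate At Completion at current efficiency
    etc = eac - ac   # Estimate To Complete: forecast cost of remaining work
    return cpi, etc, eac

cpi, etc, eac = evm_forecast(bac=1_000_000, ev=400_000, ac=500_000)
print(f"CPI={cpi:.2f}  ETC=${etc:,.0f}  EAC=${eac:,.0f}")
```

With a CPI below 1.0 the EAC grows beyond the baseline budget, which is exactly the signal the monthly update is meant to surface.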
The core issue, from current Root Cause Analysis research at PARCA (http://www.acq.osd.mil/parca), where I have worked as a support contractor, is that many of the problems stem from poor estimates made when the program was baselined and from failure to update the ETC and EAC with credible information about risks and physical percent complete.
The data reported in Todd's original chart are the results of projects based on estimates that may or may not have been credible. So the analysis of the outcomes of the completed projects is Open Loop: the target estimates measured against the actual outcomes may or may not have been credible estimates. Showing project overages therefore doesn't actually provide the information needed to correct this problem. The estimate may have been credible, but the execution failed to perform as planned.
With this Open Loop assessment it is difficult to determine any corrective actions. Todd's complete presentation, "Uncertainty Surrounding the Cone of Uncertainty," speaks to some of the possible root causes of the mismatch between estimates and actuals. As Todd mentions in his response, this was not the purpose of his chart; rather, I suspect, it was just to show the existence of this gap.
The difficulty, however, is that pointing out observations of problems, while useful to confirm there is a problem, does little to correct the underlying cause of the problem.
At a recent ICEAA conference in San Diego, Dr. Boehm and several others spoke about this estimating problem. Several books and papers were presented addressing this issue:
Software Cost Estimation Metrics Manual, Bradford Clark and Raymond Madachy (Eds.)
The 2nd Edition of Probability Methods for Cost Uncertainty Analysis: A Systems Engineering Perspective, Paul R. Garvey, CRC Press.
Both these resources, and many more, speak to the Root Causes of both the estimating problem and the programmatic issues of staying on plan.
This is the Core Problem That Has To Be Addressed
We need both good estimates and good execution to arrive as planned. There is plenty of evidence that we have an estimating problem. Conferences (ICEAA and AACE) speak to these. As well as government and FFRDC organizations (search for Root Cause Analysis here PARCA, IDA, MITRE, RAND, and SEI).
But the execution side is also a Root Cause. Much research has been done on procedures and processes for Keeping the Program Green. For example, the work presented at ICEAA, "The Cure for Cost and Schedule Growth," where more possible Root Causes are addressed from our research.
While Todd's chart shows the problem, the community - the cost and schedule community - is still struggling with the corrective action. The chart is half the story. The other half is the poor performance on the execution side, even IF we had a credible baseline to execute against.
To date both sides of the problem are unsolved, and therefore we have Open Loop Control, with neither the proper steering target nor the proper control of the system to steer toward that target. Without corrections to estimating, planning, scheduling, and execution, there is little hope of improving the probability of success in the software development domain.
Using Todd's chart from the full presentation, the core question that remains unanswered in many domains is:
How can we increase the credibility of the estimate to complete earlier in the program?
- In the feasibility stage what is a credible estimate, and how can that estimate be improved as the program moves left to right?
- What are the measures of credibility?
- How can these measures be informed as the project progresses?
- What are the physical processes to assure those estimates are increasing in accuracy and precision?
By the way, the term possible error comes from historical data. And like all How to Lie With Statistics charts, that historical data is self-selected: a specific domain, a classification of projects, and, most importantly, the maturity of the organization making the estimates and executing the program.
Much research has shown that the maturity of the acquirer influences the accuracy and precision of the estimates. Our poster child is Stardust, with on-time, on-budget, working outcomes due to both the government and contractor Program Managers' maturity in managing in the presence of uncertainty. This is one of the sources of this material:
Boehm, B., Software Engineering Economics, Prentice-Hall, 1981.
Please help me with research for my new book, tentatively titled Online Collaboration Tools for Project Managers (that’s the working title of the second edition of Social Media for Project Managers).
This survey should only take a few minutes to do. Thank you!
Can’t see the survey form? Click here: https://docs.google.com/forms/d/1IK0eXOyaEZkn4YjAjCLpdODCXWOcH2JuLpol51bSw6M/viewform?usp=send_form
This is my last post on the topic of #NoEstimates. Let's start with my professional observation. All are welcome to provide counter examples.
Estimates have little value to those spending the money.
Estimates are of critical value to those providing the money.
Since those spending the money usually appear not to recognize the need for estimating by those providing the money, the discussion has no basis on which to exchange ideas. Without the acknowledgement that in business there is a collection of principles that are immutable, those spending the money have little understanding of where the money to do their work comes from†.
Here are the business principles that inform how the business works when funding the development of value:
- The future is uncertain, but this uncertainty can be modeled. It is not unknowable.
- Managerial Accounting provides managers with accounting information in order to better inform themselves before they decide matters within their organizations, which aids their management and performance of control functions.
- Economic Risk Management identifies, analyzes and accepts or mitigates the uncertainties encountered in the managerial decision-making processes
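The first principle - the future is uncertain, but this uncertainty can be modeled - can be illustrated with a small Monte Carlo sketch. The three tasks and their (min, most likely, max) durations below are invented numbers, not data from any project:

```python
# A tiny Monte Carlo simulation: model each task's duration as a
# triangular distribution and sample the total many times, yielding a
# probability distribution of completion dates instead of a single number.
import random

random.seed(7)  # fixed seed so the sketch is reproducible

tasks = [(5, 10, 20), (3, 6, 12), (8, 12, 25)]  # (min, most likely, max) days

def simulate_total(tasks):
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

trials = sorted(simulate_total(tasks) for _ in range(10_000))
p50, p80 = trials[5_000], trials[8_000]  # 50th and 80th percentiles
print(f"50% confidence: {p50:.1f} days; 80% confidence: {p80:.1f} days")
```

The output is a confidence statement ("80% chance of finishing within N days"), which is exactly the modeled-uncertainty form of estimate the principle calls for.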
On the project management side, there are also immutable principles required for project success:
- There is some notion of what Done looks like, in units of measure meaningful to the decision makers. Effectiveness and Performance are two standard measures in domains where systems thinking prevails.
- There is a work Plan to reach Done. This Plan can be simple or it can be complex. But the order of the work and the dependencies between the work elements are the basis of all planning processes.
- There is a plan for the resources needed to reach Done. This includes staff, facilities, and funding. This means knowing something about how much and when for these resources.
- There is recognition of the risk involved in reaching Done, and a response to those risks.
- There is some means of measuring physical progress against the Plan to reach Done, so corrective actions can be taken to increase the probability of success. Tangible outcomes from the planned work are the preferred way to measure progress.
The discussion - of sorts - around No Estimates has reached a low point in shared understanding. But first let me set the stage.
If the Business and Project Success principles above are not accepted as the basis of discussion for any improvements, then there is no basis for discussion: stop reading, there's nothing here for you. If these principles are acknowledged, then please continue.
In a recent post from one of the original authors of the #NoEstimates hashtag it was said:
Quit estimates cold turkey. Get some kind of first-stab working software into the customer’s hands as quickly as possible, and proceed from there. What does this actually look like? When a manager asks for an estimate up front, developers can ask right back, “Which feature is most important?”— then deliver a working prototype of that feature in two weeks. Deliver enough working code fast enough, with enough room for feedback and refinement, and the demand for estimates might well evaporate. Let’s stop trying to predict the future. Let’s get something done and build on that — we can steer towards better.
This is one of those overgeneralizations that, when questioned, draws strong pushback against the questioner. Let's deconstruct this paragraph a bit in the context of software development.
- Quit estimates cold turkey - perhaps those paying for the work need to be consulted to determine if they have any vested interest in knowing the cost and schedule of the value they're paying for.
- Some kind of first-stab working software - sounds nice. But how much does that cost? And when can it be delivered? Can any feature in the requested list - the needed capabilities and their supporting technical and operational requirements - be done in 2 weeks? Can you show that can happen, or is that just a platitude repeated often enough that it has become a meme, without any actual evidence of being true?
- Deliver enough working code fast enough, with enough room for feedback and refinement, and the demand for estimates might well evaporate - there are two cascaded IF conditions here. IF we deliver working code fast enough, and IF we leave room for feedback, THEN estimates MIGHT evaporate.
This last one is one of those IF PIGS COULD FLY type statements.
So Here's the Issue
If it is conjectured that we can make decisions in the presence of uncertainty - and all project work operates in the presence of uncertainty by its very definition, otherwise it'd be production - then how can we make a choice between alternatives if we can't estimate the outcomes of those choices?
This is the basis of Microeconomics and Managerial Finance. When the OPers of #NoEstimates make these types of statements they're doing so of their own volition. It's likely their strongly held belief that decisions can be made without estimating the outcomes of those decisions.
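A minimal sketch of what "estimating the outcomes of those choices" means in practice is comparing two alternatives by expected monetary value, the basic tool of decision analysis. The probabilities and payoffs below are invented for illustration:

```python
# Compare two hypothetical options by expected monetary value (EMV).
# Without estimates of probability and payoff, there is no basis for
# preferring one alternative over the other.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs summing to 1.0."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * v for p, v in outcomes)

option_a = [(0.6, 120_000), (0.4, -30_000)]  # riskier, higher upside
option_b = [(0.9, 50_000), (0.1, -5_000)]    # safer, lower upside

ev_a = expected_value(option_a)
ev_b = expected_value(option_b)
print("Choose A" if ev_a > ev_b else "Choose B")
```

Remove the probabilities and payoffs - the estimates - and the comparison collapses: there is nothing left to compute.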
So when questioning what principles these conjectures are based on returns scorn for asking - accusations of trolling, of being rude, of having no respect for the person making these unfounded, unsubstantiated, untested, domain-free statements - it seems almost laughable. At times it appears to be willful ignorance of the basic tenets of business decision making. I don't pretend to know what's in the minds of many #NE supporters. Having talked to some advocates who are skeptical, it turns out that, when questioned further, they are unwilling to disavow the notion that there is merit in exploring further.
This is a familiar course for climate change deniers. All the evidence is not in, so let's challenge everything and see what we can discover. This notion of challenging and exploring in the absence of established principles is not actually that useful. In a domain like managerial finance, the microeconomics of software development decision making, and decision making in general, the principles and practices are well established.
What now seems clear is that those principles and practices are not known to those making the conjecture that we should challenge everything. Much like the political climate deniers - "Well, I'm not a scientist, but I heard on the internet there is some dissent in the measurements..." - the argument runs: "I'm not familiar with probability and statistics, and I haven't taken a microeconomics class or read any Managerial Finance books, but I'm almost sure those self-proclaimed thought leaders of #NoEstimates have something worth looking into."
Harsh? You bet it's harsh. Any idea presented in an open forum will be challenged when that idea willfully violates the principles on which business operates. Better be prepared to be challenged, and better be prepared to bring evidence that your conjecture has merit. This happens all the time in science, mathematics, and engineering. Carl Sagan's Baloney Detection Kit is one place to start, and John Baez's Crackpot Index is useful in the science and math world.
No Estimates has now reached that level, with some outrageous claims.
- Not doing estimates improves project performance by 10X
- Estimates are actually evil
- Estimating destroys innovation
- Steve McConnell proves in his book estimates can't be done.
- Todd Little's Figure 2 shows how bad we are at estimating - without, of course, reading the rest of the presentation showing how to correct these errors.
Making Credible Decisions in the Presence of Uncertainty
Decision making is the basis of business management. Here's an accessible text for learning to make decisions in the presence of uncertainty: Decision Analysis for the Professional. When there is any suggestion that decisions can be made without estimates, ask if the person making that conjecture has any evidence this is possible. Ask if they've read this book. Ask if their decision making process has:
- A decision making framework
- A decision making process
- A methodology for making decisions
- How this decision making process works in the presence of complex organizations
- A probability and statistics model for making decisions.
Here's some more background on making decisions in the presence of uncertainty.
- Business Portfolio Management: Valuation, Risk Assessment, and EVA Strategies, Michael Allen
- Real Options: Managing Strategic Investment in an Uncertain World, Martha Amram, Harvard Business School
- Making Hard Decisions: An Introduction to Decision Analysis, Robert Clemen
- Decision Making Under Uncertainty: Models and Choices, Charles Holloway
This is a sample of the many resources available for making decisions in the presence of uncertainty. There is also a large collection of resources on estimating software development projects. The one we use in our work is:
- Estimating Software-Intensive Systems: Projects, Products, and Processes, Richard Stutzke.
This and other resources are the basis of understanding how to make decisions.
When it is conjectured that we can decide without estimating, ask: have you any evidence whatsoever that this is possible, beyond your personal opinion and anecdotal experience? No? Then please stop trying to convince me your unsubstantiated idea has any merit in actual business practice.
And this is why I've decided to stop writing about the nonsense of #NoEstimates. There is no basis for the discussion anchored in principles, practices, or processes of business based in managerial finance and Microeconomics of decision making.
It's a House Built On Sand
† I learned this in the first week of my first job after graduate school.
Decisions are about making Trade Offs for the project that are themselves about:
- Evaluating alternatives.
- Integrating and balancing all the considerations (cost, performance, Producibility, testability, supportability, etc.).
- Developing and refining the requirements, concepts, capabilities of the product or services produced by the project or product development process.
- Making trade studies and the resulting trade-offs that enable the selection of the best or most balanced solution to fulfill the business need or accomplish the mission.
The purpose of this process is to:
- Identify the trade-offs – the decisions to be made – among requirements, design, schedule, and cost.
- Establish the level of assessment commensurate with cost, schedule, performance, and risk impact based on the value at risk for the decision.
- Low value at risk, low impact, simple decision making – possibly even gut feel.
- High value at risk, high impact, the decision-making process must take into account these impacts.
Making decisions about capabilities and resulting requirements is the start of discovering what DONE looks like, by:
- Establishing alternatives for the needed performance and functional requirements.
- Resolving conflicts between these requirements in terms of the product’s delivered capabilities.
Decisions about the functional behaviors and their options come next. These decisions:
- Determine preferred set of requirements for the needed capabilities. This of course is an evolutionary process as requirements emerge, working products are put to use, and feedback is obtained.
- Determine how the customer assesses requirements for lower-level functions as each of the higher-level capabilities is assessed.
- Evaluate alternatives to each requirement, each capability, and the assessed value of each capability – in units of measure meaningful to the decision makers.
Then comes the assessment of the cost effectiveness of each decision:
- Develop the Measures of Effectiveness and Measures of Performance for each decision.
- Identify the critical Measures of Effectiveness of each decision in fulfilling the project’s business goal or mission. These Technical Performance Measures are used to assess the impact of each decision on the produced value of the project.
Each of these steps is reflected in the next diagram.
Value of This Approach
When we hear that estimates are not needed to make decisions, we need to ask how the following questions can be answered:
- How can we have a systematized thought process, where decisions are based on measurable impacts?
- How can we clarify our options, problem structure, and available trade-offs using units of measure meaningful to the decision makers?
- How can we improve communication of ideas and professional judgment within our organization through a shared exchange of the impacts of our decisions?
- How can we improve communication of rationale for each decision to others outside the organization?
- How can we be assured of our confidence that all available information has been accounted for in a decision?
The decision making process is guided by the identification of alternatives
Decision-making is about deciding between alternatives. These alternatives need to be identified, assessed, and analyzed for their impact on the probability of success of the project.
These impacts include, but are not limited to:
- And all the other …ilities associated with the outcomes of the project
The effectiveness of our decision making follows the diagram below:
In the End - Have all the Alternatives Been Considered?
Until there is a replacement for the principles of Microeconomics, for each decision made on the project, we will need to know the impact on cost, schedule, technical parameters, and other attributes of that decision. To not know those impacts literally violates the principles of microeconomics and the governance framework of all business processes, where the value at risk is non-trivial.
When you hear that planning ahead by assessing alternatives is overrated and that you should quit estimating cold turkey, think again. Ask for evidence of how to make decisions in the presence of uncertainty without making estimates, making trade-offs, and evaluating alternatives - probabilistic alternatives - and all those other decision-making processes found in the managerial finance book you read in engineering, computer science, or business school.
At a recent conference, the topic was the integration of Agile with Earned Value Management on programs subject to FAR 34.201 and DFARS 252.234-7001. Here's my presentation.
How To Lie With Statistics is a critically important book to have on your desk if you're involved in any decision making. My edition is a First Edition, but I don't have the dust jacket, so it's not worth much beyond the current versions.
The reason for this post is to lay the groundwork for assessing reports, presentations, webinars, and other selling documents that contain statistical information.
The classic statistical misuse is the Standish Report, describing the success and failure of IT projects.
Here's my summation of the elements of How To Lie in our project domain:
- The Sample with the Built-In Bias - the population of the sample space is not defined. The samples are self-selected, in that only those who respond are the basis of the statistics. There is no adjustment for all those who did not respond to a survey, for example.
- The Well Chosen Average - the arithmetic mean, median, and mode are estimators of the population statistics. Any of these without a variance is of little value for decision making.
- The Little Figures That Are Not There - the classic is use this approach (in this case #NoEstimates) and your productivity will improve 10X - that's 1,000%, by the way. A 1,000% improvement. That's unbelievable, literally unbelievable. The actual improvements are not stated, only the percentage. The baseline performance is not stated. It's unbelievable.
- Much Ado About Practically Nothing - the probability of being in the range of normal. This is the basis of advertising. What's the variance?
- The Gee-Whiz Graph - using graphics and adjustable scales provides the opportunity to manipulate the message. The classic example of this is the estimating errors in a popular graph used by the No Estimates advocates. It's a graph showing the number of projects that complete over their estimated cost and schedule. What's not shown is the credibility of the original estimate.
- The One-Dimensional Picture - using a picture to show numbers, where the picture is not in the same scale as the numbers, provides a messaging path for visual readers.
- The Semiattached Figure - if you can't prove what you want to prove, demonstrate something else and pretend they are the same thing. In one example, the logic is inverted: estimating is conjectured to be the root cause of problems. With no evidence of that, the statement becomes we don't see how estimating can produce success, so not estimating will increase the probability of success.
- Post Hoc Rides Again - post hoc causality is common in the absence of a cause-and-effect understanding. The differences between correlation and causality are many times not understood.
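The Well Chosen Average point can be made concrete with a few lines of code. The overrun figures below are invented; the point is that the mean, the median, and the spread tell very different stories about the same data:

```python
# The same invented project-overrun data yields very different "averages,"
# and none of them says anything about spread without a variance alongside.
import statistics

overruns_pct = [2, 3, 3, 5, 8, 12, 45, 160]  # hypothetical cost overruns (%)

mean = statistics.mean(overruns_pct)      # pulled up by the two outliers
median = statistics.median(overruns_pct)  # the "typical" project
stdev = statistics.stdev(overruns_pct)    # the spread the averages hide

print(f"mean={mean:.1f}%  median={median:.1f}%  stdev={stdev:.1f}%")
```

A report quoting only "average overrun: 30%" and one quoting only "typical overrun: 6.5%" would both be technically true of this data, which is exactly the trick the book describes.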
Here's a nice example of How To Lie
There's a chart from an IEEE Computer article showing the numbers of projects that exceeded their estimated cost. But let's start with some research on the problem. Coping with the Cone of Uncertainty.
There is a graph, popularly used to show that estimates rarely match the actual outcomes.
This diagram is actually MISUSED by the #NoEstimates advocates. The presentation below shows the follow-on information for how estimates can be improved to increase the confidence in the process and the improvements to the business. So before anyone accepts any conjecture from a #NoEstimates advocate using the graph above, please read the briefing at the link below to see the corrective actions for poor estimates.
Here's the link to Todd's entire briefing, not just the many-times-misused graph of estimates not representing the actuals: Uncertainty Surrounding the Cone of Uncertainty.
Much of what we do today relies on other people – other people who don’t work for us. Whether it was as part of a project or another professional interaction, I’m sure you have met colleagues who aren’t taking responsibility at work for their tasks.
It’s a pain to work with coworkers who won’t step up. It means your job turns into micromanaging, spoon feeding and generally running around after people who probably earn quite enough to be operating in a more professional manner. It takes up a lot of time too: time that could be better spent doing your own job.
If this sounds like your work environment then there are things you can do to change it. In this article we’ll look at why people don’t take responsibility for tasks and also how you can influence the culture of the office so that taking responsibility becomes the norm.
I should say now that changing behaviour is hard work. I don’t promise to be able to do it in 1500 words, but there are some fool-proof strategies coming up that you can start putting into practice today.
Why don’t they take responsibility?
Unfortunately there isn’t a clear cut answer to this. There could be any number of reasons why someone isn’t taking full responsibility for their tasks, even if their job description says they should. You’ll have to have a conversation with them about their performance to try to uncover why they aren’t pulling their weight on the team.
Here are 8 reasons for people shirking their workload and what you can do about it.
#1: They have never been given responsibility before so they don’t know what to do now they have it.
You can: Be clear that they are responsible. Tell them that they have the authority to make decisions and in what situations. A roles and responsibilities document or a RACI matrix can help with this. Once they have signed up to their job description and the responsibilities that brings, hold them accountable. You might be pleasantly surprised by their change in attitude once they know what is expected of them.
#2: They are lazy.
You can: Take a stand. Fortunately I rarely have to work with people who can’t be bothered. That lack of commitment is a problem for me, so I would be putting them under formal performance management (or asking their manager to do it). Their options are shape up or get out. Your team is too busy to carry someone who doesn’t want to be there.
#3: They don’t have the skills to do the work.
You can: Find out if they are struggling. Maybe the work is too hard or too complex for them to manage alone. If they don’t have the skills to complete their tasks it will look like they aren’t taking full responsibility, but actually they can’t. Offer training if necessary. The worst case scenario involves removing them from your project team and replacing them with someone who can do the job.
It is also worth looking into the Situational Leadership model if you aren’t familiar with it. It is based on assessing the Skill (ability) and Will (attitude) of the people on the team and changing your management style accordingly.
People who are willing but lack the skill are keepers – you can coach them through their tasks or offer them training. Then hopefully they will take responsibility for future similar activities.
#4: They don’t know how to manage their time or tasks.
You can: Help. Everyone can learn how to manage their time and their workload. You are probably quite good at it and have a lot to offer: developing your team is also part of the project manager’s responsibility. The difficulty here is that it’s embarrassing to admit to being bad at time management, so you might have to guess that this is the real reason why they aren’t stepping up.
#5: They don’t have the time.
You can: Help them focus. This is a different problem to the one above. In this situation they have their ‘normal’ work to do and they don’t have the time to take on tasks for you as well. They need help defining their priorities so they aren’t spending time on low priority work when they should be doing high priority project tasks.
You may have to involve their line manager or the other people giving them tasks, as you can’t just waltz in and say your work is more important. Prioritise everything together, and if your work doesn’t come out on top then you’ll have to manage around that decision.
#6: The work is too easy or they think it is boring.
You can: Find ways to make the work more interesting or appealing. People who don’t take their project work seriously, or don’t want to engage with you or their tasks, lack the will to work on what you have allocated to them. Making it more challenging by giving them responsibility might actually make it better for them.
#7: They never signed up to the plan and they think it’s unrealistic.
You can: Avoid this situation by letting people set their own deadlines. Or at least negotiate the deadlines with them – don’t impose deadlines on people. There’s more in this article about how to manage your project plan when your team doesn’t believe in the schedule.
#8: They don’t trust or respect their colleagues.
You can: Panic (just a little bit, away from anyone who will see you) because this is a difficult situation to resolve. If the team doesn’t get on because there is baggage and a general feeling that there’s no point in working together because their colleagues are losers, then that’s a challenge.
You probably won’t have the full history of what has gone on. Focus on building good relationships between individuals and let the team come together as a team later. You’ll also have to get to the bottom of why there is no respect: what is it exactly that individuals have done that has made the respect inherent in the workplace disappear?
That explains some of the reasons why people might fail to step up on the team. Now let’s look at more things that you can do about it.
6 Ways to influence your team to take more responsibility
- Make it easier to work together than to work separately. Use collaboration tools, shared network drives and email distribution lists, and get everyone using them correctly.
- Reward sharing/collaboration by calling it out and thanking people for their efforts to work together.
- Team building that focuses on explaining what other people actually do. When you don’t understand your colleagues’ jobs it is too easy to believe they do nothing all day but create problems for you.
- Get everyone to sit together. Move desk allocations around so the people who work together the most sit closest to each other.
- Set up a shared conference call facility and encourage them to use it. If they don’t, call them out on it: “No one used the conference call number this week, didn’t you have anything to talk about?”
Finally, here is a technique I learned from Get-It-Done Guy (a great resource, if you don’t know him already).
Work together to set the deadline. Let’s say that Emily is going to prepare an options appraisal by Wednesday. She agrees to that. You book a meeting for Wednesday afternoon. You say, “If you’ve already sent it to me by then and I don’t have any questions, we’ll cancel that meeting.”
Emily then has an incentive to do the work by the deadline and get the meeting cancelled.
If she doesn’t deliver, you hold the meeting and use it to work with her or stand over her until you get what you want. This is not the ideal use of your time (or hers) but at least you get what you want by the deadline and it doesn’t hold you up.
If she does deliver, cancel the meeting and everyone is happy.
I really like this approach and am looking forward to trying it out. Luckily for me, I don’t work with very many flaky people so I might have to keep this technique in reserve until I really need it.
Enable responsibility, not passivity
Be responsible for helping people take responsibility.
Facilitate responsibility by not stepping in with the answer yourself. This also helps break down silos, because you force the right people to speak to each other.
When people look at you for the answer, pass it on. “Adam, that’s a question for you.” Don’t micromanage – because that is what they are looking for.
Tom Kendrick’s book Results Without Authority* is very good on influencing and how to make people do things when they don’t work for you.
Stever Robbins, the Get-It-Done Guy, is the author of Get-It-Done Guy’s 9 Steps to Work Less and Do More. Read my review of his book here (short review: one of my all-time favourites on productivity and time management). Or just buy it on Amazon.
* This article contains affiliate links at no cost to you.
In a recent post on forecasting capacity planning, a time series of data was used as the basis of the discussion.
Some static statistics were then presented, with a discussion of the upper and lower ranges of the past data.
The REAL question, though, is: what are the likely outcomes for data in the future, given the past performance data? That is, if we recorded what happened in the past, what is the likely data in the future?
The average and the upper and lower ranges from the past data are static statistics. That is, all the dynamic behavior of the past is wiped out by the averaging and deviation processes, so that information can no longer be used to forecast the possible outcomes of the future.
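To illustrate the point with a hypothetical sketch (the numbers here are invented, not from the original post): two series with identical means and ranges can have completely different dynamics, so the static statistics alone cannot tell them apart.

```python
from statistics import mean

# Two hypothetical series with identical static statistics (invented numbers)
trending = [10, 20, 30, 40, 50, 60, 70, 80]       # steadily rising
oscillating = [80, 10, 70, 20, 60, 30, 50, 40]    # bouncing around

print(mean(trending), mean(oscillating))          # both equal 45
print(min(trending), max(trending))               # both series range 10 to 80
print(min(oscillating), max(oscillating))
# A forecast built only from the mean and range says "about 45, somewhere
# between 10 and 80" for both series, yet the trending one will plausibly
# continue upward while the oscillating one will not. Averaging erased
# exactly the signal a forecast needs.
```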
This is one of the attributes of The Flaw of Averages and How to Lie With Statistics, two books that should be on every manager's desk. That is, every manager tasked with making decisions in the presence of uncertainty when spending other people's money.
We now have a Time Series and can ask the question: what is the range of possible outcomes in the future, given the values in the past? This can easily be done with a free tool - R. R is a statistical programming language, available free from the Comprehensive R Archive Network (CRAN). In R there are several functions that can be used to make these forecasts - that is, to produce the estimated values in the future from the past, along with their confidence intervals.
Let's start with some simple steps:
- Record all the data from the past. For example, make a text file of the values in the first chart. Name that file NE.Numbers.
- Start the R tool. Better yet, download an IDE for R - RStudio is one. That way you have a development environment for your statistical work. There are also many free R books on statistical forecasting - estimating outcomes in the future.
- OK, read the Time Series of raw data from the file of values and assign it to a variable.
- The ts function converts the raw data into an object - a Time Series - that can be used by the next function.
- With the Time Series now in the right format, apply the ARIMA function. ARIMA is Autoregressive Integrated Moving Average, also known as the Box-Jenkins algorithm. This is the George Box of the famously misused and blatantly abused quote all models are wrong, some models are useful. If you don't have the full paper that quote came from, and the book Time Series Analysis: Forecasting and Control by Box and Jenkins, please resist re-quoting it out of context. That quote has become a meme for those without the background to do the math of time series analysis, and a mantra for willfully ignoring the math needed to actually make estimates of the future - forecasting - using time series of the past in ANY domain. ARIMA is the beginning basis of all statistical forecasting in science, engineering, and finance.
- The ARIMA algorithm has three parameters - p, d, q
- p is the order of the autoregressive model.
- d is the degree of the differencing
- q is the order of the moving average
- Here's the manual in R for ARIMA
- With the original data turned into a Time Series and presented to the ARIMA function we can now apply the Forecast function. This function provides methods and tools for displaying and analyzing univariate time series forecasts including exponential smoothing via state space models and automatic ARIMA modelling.
- When applied to the ARIMA output we get a Forecast series that can be plotted.
Here's what all this looks like in RStudio:
library(forecast) - load the forecast package, which provides forecast() and its plot method
NETS = ts(NE.Numbers) - convert the raw numbers to a time series
NETSARIMA = arima(NETS, order = c(0,1,1)) - fit an ARIMA model to that series
NEFORECAST = forecast(NETSARIMA) - make a forecast from the fitted model
plot(NEFORECAST) - plot it
Here's the plot, with the time series from the raw data and the 80% and 95% confidence bands on the possible outcomes in the future.
The Punch Line
You want to make decisions with other people's money when the 80% confidence interval on a possible outcome spans a -56% to +68% variance? Really? Flipping coins gives a better probability of landing inside the range of all the outcomes that happened in the past. The time series is essentially a random series, with very low confidence of any value being anywhere near the mean. This is the basis of The Flaw of Averages.
Where I work this would be a non-starter if we came to the Program Manager with this forecast of the Estimate to Complete based on an Average with that wide a variance.
Possibly, where there is low value at risk, a customer that has little concern for cost and schedule overrun, and maybe where the work is actually an experiment with no deadline, no not-to-exceed budget, or any other real constraint. But if your project has a need date for the produced capabilities - a date when those capabilities need to start earning their keep and producing value that can be booked on the balance sheet - a much higher confidence in what the future NEEDS to be is likely going to be the key to success.
The Primary Reason for Estimates
First, estimates are for the business. Yes, developers can use them too. But the business has a business goal: make money at some point in the future on the sunk costs of today - the breakeven date. That investment is - hopefully - recoverable, so we need to know when we'll be even with it. This is how business works. Businesses make decisions in the presence of uncertainty - not on the opinion of development saying we recorded our past performance as an average and projected that into the future. No, they need a risk-adjusted, statistically sound level of confidence that they won't run out of money before breakeven. What this means in practice is a management reserve, and cost and schedule margin, to protect the project from the naturally occurring variances and the probabilistic events that derail all the best laid plans.
Now, developers may not think like this. But someone, somewhere, in a non-trivial business does - usually in the Office of the CFO. This is called Managerial Finance, and it's how firms with serious money at risk manage.
So when you see time series like those in the original post, do your homework and show the confidence in the probability of the needed performance actually showing up. And by needed performance I mean the steering target used in the Closed Loop Control system - the system used to increase the probability that the planned value the agilists so dearly treasure actually appears somewhere near the planned need date and somewhere around the planned cost, so the Return on Investment for those paying for your work doesn't turn negative and they label their spend as underwater.
So What Does This Mean in the End?
Even when you're using past performance - one of the better ways of forecasting the future - you need to give careful consideration to those past numbers. Averages and simple variances, which wipe out the actual underlying time series behavior, are not only naive, they are bad statistics used to make bad management decisions.
Add to that the poorly formed notion that decisions about future outcomes can be made in the presence of uncertainty without estimates about that future, and you've got the makings of management disappointment. The discipline of estimating future outcomes from past behaviors is well developed. The mathematics, and especially the terms used in that mathematics, are well established. Here are some sources we use in our everyday work. These are not populist books; they are math and engineering. They have equations, algorithms, and code examples. They are used when the value at risk is sufficiently high that management is on the hook for meeting the performance goals in exchange for the money assigned to the project.
If you work on a project that doesn't care too much about deadlines, budget overages, or what gets produced other than the minimal products, then these books and related papers are probably not for you. Most likely, not estimating the probability that you'll overspend, show up seriously late, or fail to produce the needed capabilities to meet the business plans will be just fine. But if you are expected to meet the business goals in exchange for the spend plan you've been assigned, these might be a good place to start to avoid being a statistic (a dead skunk in the middle of the road) in the next Chaos Report (no matter how poor its statistics are).
This, by the way, is an understanding I came to on the plane flight home this week. #NoEstimates is a credible way to run your project when those conditions are in place. Otherwise you may want to read how to make credible forecasts of what the cost and schedule are going to be for the value produced with your customer's money, assuming they actually care about not wasting it.
- Time Series, 3rd Edition, Sir Maurice Kendall and J. Keith Ord
- Applied Regression Analysis, 3rd Edition, Norman R. Draper and Harry Smith
- An Introduction to Probability Theory and Its Applications, William Feller
- Estimating Software-Intensive Systems: Projects, Products, and Processes, Richard D. Stutzke
- Forecasting: Methods and Applications, Spyros Makridakis, Steven C. Wheelwright, and Rob J. Hyndman
The more traditional “waterfall” approach to project management, which all the major project frameworks such as PRINCE2®, APM BoK and PMBoK® came from, works well in stable contexts.
There is a clear case that the world we operate in since PRINCE2® was launched in 1996 is now far more volatile, uncertain, complex and ambiguous. Waterfall approaches that encourage thorough big design at the beginning are still relevant where we can be confident before work begins that requirements will not need to change significantly during the life of the project.
However, such is the volatility of the operational drivers that bear upon businesses that often the customer simply must adapt. The urgency of these drivers will not allow them to wait until the end of the project. This might require frequent changes throughout a project. With a waterfall process this probably means expensive re-working of the plan and wasted effort.
That leads to our belief that taking the Agile approach of frequent small deliveries coupled with a more continuous conversation with the customer allows much greater flexibility. It delivers results, and therefore benefits, much more quickly.
The beauty of Agile is that customers can decide what they want to achieve as they see what the suppliers can achieve. Its approach is one of ‘learning by doing’, allowing teams to reflect on their experiences as they go along and adapt accordingly.
4 key elements of Agile…
The success of Agile comes down to a number of key elements. To start with, you need a self-organising team. By moving away from silo working, team members are encouraged to use their overlapping skills and work together which in turn gives them much greater empowerment and satisfaction.
Then there’s ‘timeboxing’ where the emphasis is on fixing the time and cost elements of a project but also allowing the plan to evolve. Requirements can be prioritised with crucial input from a customer representative as work progresses. The Agile contract between customer and supplier is radically different to the expectations of waterfall; requirements are flexible within agreed parameters, but time and cost are not.
Third, Agile practice will usually keep ‘must haves’ to around 40% of the total effort – it’s always tempting to put too many priorities into the ‘must haves’ segment. Similarly, Agile teams also ensure there is only a finite number of tasks in the ‘doing’ category to help reduce the complexity of projects at any one time.
Finally, people engagement is a critical part of Agile working and is successful because the different stakeholders within a team work closer together and are empowered to have more say in both what they do and the order of the work. This is recognised as being much more motivational than the classic ‘command and control’ approaches which tend to be common among management.
…and 3 Agile myths
Of course, not everyone is ready to embrace Agile and one of the common misconceptions is that there is some sort of overall unifying Agile methodology.
That rather misses the point. There is no one right way of organising and managing an Agile project, and that’s what makes it so attractive to some and threatening to others.
Some try to adopt Agile techniques while at the same time continuing with a waterfall perspective, but as you might expect this is unlikely to deliver success – it is the Agile way of working that makes the techniques work, rather than the other way round.
Lastly, there are those who think Agile is only relevant to software development but that’s simply not true – it can equally be used on a variety of non-software examples such as renovating a large building, improving business processes or improving job aids for customer-facing personnel.
Barriers to Agile
One of the main reasons why Agile won’t work is if an organisation operates a culture of micromanagement and entrenched silos of working that won’t allow collaborative behaviours.
Other issues are likely to include weak team leadership, or trying to implement Agile in organisations where the nature of the work makes releasing working output in small iterations inconceivable.
Why Agile is here to stay
Agile has made such significant inroads that it cannot be dismissed as a fad and should be understood by all managers who are involved in innovation and development. To ignore it now is to miss an opportunity that can deliver quick results and achieve cost savings.
Apple, Amazon, GE Healthcare and Salesforce.com are among those organisations already using Agile, having recognised that it is better suited to the complexities of 21st Century organisations. And above all, Agile knows how to get the best out of knowledge workers and ensure they stay motivated.
Faced with those conclusions – why wouldn’t you want to be more Agile?
The management of projects involves many things: Capabilities, Requirements, Development, Staffing, Budgeting, Procurement, Accounting, Testing, Security, Deployment, Maintenance, Training, Support, Sales and Marketing, and other development and operational processes. Each of these has interdependencies with other elements. Each operates in its own specific way on the project. Almost all have behaviors described by probabilistic models driven by underlying statistical processes.
Management in this sense is control in the presence of these probabilistic processes. And yes, we can control these items - it's a well developed discipline, starting with Statistical Process Control, Monte Carlo Simulation, Bayesian Networks, Probabilistic Real Options, and other methods based on probabilistic processes.
The notion that these items are not controllable is at its heart flawed and essentially misinformed. But this control requires information. It's been mentioned before in Closed Loop Control, Closed Loop versus Open Loop, Staying on Plan Means Closed Loop Control, Use and Misuse of Control Systems, and Why Project Management is a Control System.
All these lead to Five Immutable Principles of Project Success. Along with these Principles, comes Practices, and Processes. But it's the Principles we're after as a start.
We can make estimates from the data or models in some probabilistically informed manner. This is the role of estimating. To inform our decision making processes in the presence of uncertainty of possible future outcomes, knowing something about the past and present state of the system under management.
... provide needed capabilities to those paying for the project to meet some business goal or fulfill a mission strategy. To accomplish some beneficial outcome in exchange for the cost and time invested in development of the capabilities.
Without these estimates, we have no signal needed to take corrective action. We have an Open Loop Control system - a system that takes any path it wants, with no control mechanism to keep it on track. The open loop control system is a non-feedback system in which the control input is determined using only the current state of the system and a model of the system. There is no feedback to determine whether the system is achieving the desired output based on the reference input or set point. The system does not observe itself to correct itself and, as such, is more prone to errors and cannot compensate for disturbances. This means we're going to get what we're going to get, with no chance to steer the system toward our desired outcome.
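The difference can be shown in a minimal simulation (a hypothetical Python sketch; the planned rate, number of periods, and disturbance range are all invented for illustration). The open-loop plan keeps commanding the planned rate regardless of actual progress, while the closed-loop controller measures the variance to plan each period and adjusts its commanded effort.

```python
import random

random.seed(42)
PLANNED_RATE = 10.0                  # planned units of work per period (assumed)
PERIODS = 20
TARGET = PLANNED_RATE * PERIODS      # 200 units planned in total

def actual_output(commanded):
    """Work actually performed: the commanded effort minus a random disturbance."""
    return commanded - random.uniform(0.0, 4.0)

# Open loop: keep commanding the planned rate, never look at actual progress.
open_done = 0.0
for _ in range(PERIODS):
    open_done += actual_output(PLANNED_RATE)

# Closed loop: each period, measure the variance to the cumulative plan and
# command the effort needed to close that gap - the feedback steers the work.
closed_done = 0.0
for t in range(1, PERIODS + 1):
    commanded = PLANNED_RATE * t - closed_done   # gap to the cumulative plan
    closed_done += actual_output(commanded)

print(f"open-loop shortfall:   {TARGET - open_done:.1f} units")
print(f"closed-loop shortfall: {TARGET - closed_done:.1f} units")
# The open-loop shortfall accumulates every period's disturbance; the
# closed-loop shortfall is only the final period's uncorrected disturbance.
```

The sketch is the control-systems point in miniature: without the feedback signal (the measured variance to plan) there is nothing to steer with, and the errors simply pile up.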
This month’s free project management template is an agenda template for a lessons learned meeting.
I’m a big fan of holding lessons learned meetings after a project and also as you go through a project. Learn what you can, when you can.
Lessons learned meetings (which you will also hear called post-implementation reviews or project post-mortems) are a bit different to other meetings. You’ll need a bespoke agenda for your meeting: the normal agenda that you use when you run a project team meeting won’t be enough.
Click the button to download the agenda template. It’s a .docx file.
IE9 users (and those on other old browsers): if you can’t see the button, leave a comment or email me and I’ll send you the document.
Email subscribers: If you’re looking at this article via an email you will have to go to the website to download the document. Here’s the link you need to get straight to the right page: http://www.girlsguidetopm.com/2015/08/free-template-lessons-learned-agenda/
Downloading this template, and any of my other free project management templates, will subscribe you to my newsletter. If you don’t want to get it, unsubscribe at any time. If you already do, you won’t get two copies. MailChimp is clever like that.
And finally: this template is free for you to use in your work but don’t sell it. That’s the only catch!
Let's start with a background piece on estimating: the Fermi Problem. A Fermi estimate is an order estimate of something - not an order-of-magnitude estimate (that's a 10X estimate, easy for anyone to make). These types of problems are encountered in physics and engineering education - from personal experience, in oral exams where we were asked to estimate something quickly on the blackboard (yes, the blackboard). Something like: what is the orbital velocity of a star with a specific mass, composed of a specific set of fusion elements? You have 5 minutes, young student, work quickly.
These back-of-the-envelope calculations are well-known exercises showing how to make estimates in the presence of uncertainty, with very little data in hand. The technique is named after Enrico Fermi, for his ability to make good approximations with little or no actual data. These problems involve making justified guesses (not the uninformed guesses we see in many domains), with upper and lower variances.
A nice example is how many piano tuners are there in Chicago in 2009?
- There are approximately 9,000,000 people living in Chicago.
- On average, there are two persons in each household in Chicago.
- Roughly one household in twenty has a piano that is tuned regularly.
- Pianos that are tuned regularly are tuned on average about once per year.
- It takes a piano tuner about two hours to tune a piano, including travel time.
- Each piano tuner works eight hours in a day, five days in a week, and 50 weeks in a year.
With these assumptions, the number of piano tunings a year is approximately
- (9,000,000 people in Chicago / 2 people per household) x 1 piano per 20 households x 1 tuning per piano per year = 225,000 tunings per year
- (50 weeks per year x 5 days per week x 8 hours a day) / 2 hours per tuning = 1,000 tunings per year per piano tuner
- (225,000 tunings per year) / (1,000 tunings per year per tuner) = 225 tuners in Chicago in 2009
- The actual number in 2009 was 290
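The arithmetic above is simple enough to check in a few lines. This is just a transcription of the listed assumptions into Python, nothing more:

```python
# Fermi estimate: piano tuners in Chicago, using the assumptions listed above.
households = 9_000_000 / 2            # ~2 people per household
pianos = households / 20              # one household in twenty has a piano
tunings_per_year = pianos * 1         # each piano tuned about once per year

tuner_hours_per_year = 50 * 5 * 8     # 50 weeks x 5 days x 8 hours
tunings_per_tuner = tuner_hours_per_year / 2   # 2 hours per tuning, incl. travel

tuners = tunings_per_year / tunings_per_tuner
print(int(tunings_per_year), int(tunings_per_tuner), int(tuners))  # 225000 1000 225
```

The estimate of 225 against the actual 290 is off by about 22% - well within the accuracy a Fermi estimate aims for.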
This is similar to the Drake equation, which estimates the number of intelligent civilizations in our galaxy. This approach, by the way, may be one of the reasons estimating is seen as hard - or even not possible - by some: they missed the opportunities where estimating is taught.
What Does This Have to do with Project Management?
Estimation theory is a critical aspect of project management. When spending other people's money in the presence of uncertainty, we need to make decisions in the presence of that uncertainty. Estimation theory is a branch of statistics dealing with estimating values of parameters (numbers) based on measured/empirical data that have random components. The parameters describe the underlying physical process in such a way that their values affect the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements.
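A toy illustration of that last sentence (the true value and noise level are invented; this is a sketch, not anyone's production method): measurements of some "true" parameter arrive with random noise, and an estimator - here the sample mean, with its standard error - approximates the unknown parameter from the noisy data.

```python
import random
from math import sqrt
from statistics import mean, stdev

random.seed(7)
TRUE_VALUE = 42.0     # the parameter we cannot observe directly (invented)
NOISE_SD = 3.0        # measurement noise level (invented)

# 50 noisy measurements of the unknown parameter
measurements = [TRUE_VALUE + random.gauss(0.0, NOISE_SD) for _ in range(50)]

estimate = mean(measurements)                        # the estimator: sample mean
std_err = stdev(measurements) / sqrt(len(measurements))
low, high = estimate - 2 * std_err, estimate + 2 * std_err   # ~95% interval

print(f"estimate {estimate:.2f}, ~95% CI [{low:.2f}, {high:.2f}]")
# A decision maker acts on the interval, not the point estimate alone.
```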
In the project world we have three core variables - cost, schedule, and technical performance. These are interdependent, likely non-linear, and many times non-stationary (evolving in time). There is a nice MIT OCW course on the Art of Approximation in Science and Engineering. These back-of-the-envelope estimates are critical to success in engineering and science. They are also critical to estimating in software development.
So when we hear it's hard or it's not even possible to estimate software development, don't believe it for a moment. Here's a butt simple way on How to Estimate Almost Any Software Deliverable.
The next thing we hear is that estimates are the smell of dysfunction. And of course no dysfunctions are named, no root causes of the dysfunction are named, and no corrective actions are named - only stop estimating, since estimates are evil, used as commitments, and misused to punish developers.
The Real Bottom Line
In business there is a framing assumption: managerial finance. This framing assumption informs those of us on the business side when spending our money and making decisions. This is the basis of the microeconomics of decision making.
When it is conjectured that decisions can be made in the presence of uncertainty without making estimates of the cost and impacts of those decisions, we have to ask: are those making those conjectures informed by any framework based in the processes of business management? It appears not.
It is popular in some agile circles to use Waterfall as the stalking horse for every bad management practice in software development. A recent example:
Go/No Go decisions are a residue of waterfall thinking. All software can be built incrementally and most released incrementally.
Nothing in Waterfall prohibits incremental release. In fact, the notion of block release is the basis of most Software Intensive Systems development. From the point of view of the business, capabilities are what they bought - the capability to do something of value in exchange for the cost of that value. Here's an example from the health insurance business. Incremental release of features is of little value if those features don't work together to provide some needed capability to conduct business. A naive approach is the release early and release often platitude of some in the agile domain. Let's say we're building a personnel management system. This includes recruiting, on-boarding, provisioning, benefits signup, time keeping, and payroll. It would not be very useful to release the time keeping feature if the payroll feature were not ready.
So before buying into the platitude of release early and often, ask: what does the business need to do business? Then draw a picture like the one above, and develop a Plan for producing those capabilities in the order they are needed to deliver the needed value. Without this approach, you'll be spending money without producing value and calling that agile.
That way you can stop managing other people's money with platitudes and replace them with actual business management processes. So every time you hear a platitude masquerading as good management, ask: does the person using that platitude work anywhere with high value at risk? If not, they probably have yet to encounter the actual management of other people's money.
People who know me will understand why I love Sherlock Holmes. I watch enough procedural crime drama – the modern day TV equivalent – to be able to work out whodunnit long before the last commercial break.
The Arthur Conan Doyle novels that I read as a child are formulaic in a similar kind of way to the structure of an episode of CSI. Holmes has a particular approach to solving cases whether it’s why a bell rope goes nowhere or if the Baskerville family is actually cursed.
Humour me while I make the leap between what the great detective does and what project managers do on a day-to-day basis. There are similarities in how we approach project issues. Settle down with a pipe in your dressing gown and let me explain…
First, assemble the facts
Facts are important. Holmes meets the client and finds out what has happened. He will often go to the scene of the crime in person to review the facts in context. That gives him the opportunity to assess the situation in person instead of relying on someone else’s report of the issue.
The same applies for project managers. Use as much first-hand data as you can to work out what caused the problem. Don’t rely on the reports of others unless you have to, and certainly don’t rely solely on one person’s perspective when you have the chance to hear from others as well.
Gather your evidence and filter out opinions, keeping the facts clear.
Now, draw your conclusions
Facts in hand, Holmes will make deductions to assemble a picture of what happened. There are some humongous logical leaps in the stories but putting those aside, what you are left with is a set of conclusions.
Your thought process is likely to be different, and probably a lot more transparent. Sometimes you have to wait until the end of a Conan Doyle story before the pieces come together. In project management terms, you’ll want to build the backstory as you uncover more and more about the issue.
Make assumptions and draw conclusions about how the issue will affect the project. Deduce what impact it will have based on what you know. Consider the challenges it presents for the project schedule and budget, and don’t overlook anything that could be important.
Sherlock Holmes is the main character in the stories but he doesn’t work alone. Dr Watson, his trusted sidekick, is also involved with clients and in the investigations. Watson documents everything and provides the safe space that Holmes needs to explore ideas. Holmes also uses the Baker Street Irregulars, a gang of street children who move around London unnoticed by criminals and clients alike, reporting back what they find.
Issue management on projects isn’t something you can do alone. Typically you’ll talk about issues in your team meetings. Difficult problems might need some mind mapping sessions or dedicated discussions to identify possible solutions.
You definitely need Watson’s documentation skills. Get a free project management issue log template so you can track activity on your issues.
Finally, present your conclusions
When the case is solved, Holmes presents his solution to the people who need to know. Inspector Lestrade is a buffoon in the books most of the time, and this is a great literary technique: it forces Holmes to explain in detail what has happened and how he reached his conclusions for the benefit of the reader as well. Holmes will also present what needs to happen next: his recommendation for the course of action Lestrade should take, which is normally to arrest the bad guy.
Get approval from your sponsor before you take action on larger issues.
On a project you’ll be presenting the problem, your conclusions and a recommendation for further action. It’s normally a conversation you’ll have with the project sponsor or client. If it’s a small problem, you may keep it within the realms of the project team and once you’ve agreed on a way forward you’ll swing your plans into action without much involvement from others. For larger issues you will need approval from your sponsor to go ahead with your action plan.
From there it’s a simple job to get your project back on track – elementary!
All the work we do in the projects domain is driven by uncertainty. Uncertainty of some probabilistic future event impacting our project. Uncertainty in the work activities performed while developing a product or service.
Decision making in the presence of these uncertainties is a natural process in all of business.
The decision maker is asked to express her beliefs by assigning probabilities to certain possible states of the system in the future and the resulting outcomes of those states.
What's the chance we'll have this puppy ready for VMWorld in August? What's the probability that when we go live and 300,000 users logon we'll be able to handle the load? What's our test coverage for the upcoming release given we've added 14 new enhancements to the code base this quarter? Questions like that are normal everyday business questions, along with what's the expected delivery date, what's the expected total sunk cost, and what's the expected bookable value measured in Dead Presidents for the system when it goes live?
To answer these and the unlimited number of other business, technical, operational, performance, security, and financial questions, we need to know something about probability and statistics. This knowledge is an essential tool for decision making no matter the domain.
Statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write - H.G. Wells
If we accept the notion that all project work is probabilistic - driven by the underlying statistical processes of time, cost, and technical outcomes, including Effectiveness, Performance, Capabilities, and all the ...ilities that manifest and determine value after a system is put into initial use - then these conditions are the source of uncertainty, which comes in two types:
- Reducible - event based with a probability of occurrence within a specified time period.
- Irreducible - naturally occurring, described by a Probability Distribution Function of the variances produced by the underlying process.
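The two types can be sketched in a small Monte Carlo simulation (a hypothetical illustration with invented numbers, not a model from any particular project): the irreducible part is the natural variance in a task's duration, and the reducible part is a discrete risk event that either occurs or does not.

```python
import random

random.seed(1)
TRIALS = 10_000
BASE_DURATION = 20.0    # planned duration in days (assumed)
NOISE_SD = 2.0          # irreducible: natural variance of the work (assumed)
RISK_PROB = 0.25        # reducible: 25% chance a risk event occurs (assumed)
RISK_IMPACT = 10.0      # ...adding 10 days when it does (assumed)

outcomes = []
for _ in range(TRIALS):
    duration = random.gauss(BASE_DURATION, NOISE_SD)   # irreducible uncertainty
    if random.random() < RISK_PROB:                    # reducible uncertainty
        duration += RISK_IMPACT
    outcomes.append(duration)

outcomes.sort()
p80 = outcomes[int(0.80 * TRIALS)]   # 80th-percentile completion time
print(f"mean {sum(outcomes)/TRIALS:.1f} days, 80% confidence {p80:.1f} days")
# Buying down the risk (lowering RISK_PROB) shrinks the long tail; the
# natural variance remains and can only be covered by schedule margin.
```

Note how the 80% confidence duration sits well above the mean: the event risk drags the tail out, which is exactly the information a static average throws away.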
If you don't accept this - that all project work is probabilistic in nature - stop reading, this Blog is not for you.
If you do accept that all project work is uncertain, then there are some more assumptions we need in order to make sense of the decision making processes. The term statistic has two definitions - a long-ago one and a current one. The long-ago one means a fact, referring to numerical facts. A numerical fact is a measurement, a count, or a rank. The number can represent a total, an average, or a percentage of several such measures. The term also applied to the broad discipline of statistical manipulation, in the same way accounting applies to entering and balancing accounts.
Statistics in the second sense is a set of methods for obtaining, organizing, and summarizing numerical facts. These facts usually represent partial rather than complete knowledge about a situation - for example, a sample of the population rather than a count of the entire population, as in the census.
These numbers - statistics - are usually subjected to formal statistical analysis to help in our decision making in the presence of uncertainty.
In our software project world, uncertainty is an inherent fact. Software uncertainty is likely much higher than in construction, since requirements in software development are soft, unlike the requirements in interstate highway development. But while domains differ in their level of uncertainty, estimates are still needed to make decisions in the presence of those uncertainties. Highway development has many uncertainties of its own - not the least of which are weather and weather delays.
When you measure what you are speaking about and express it in numbers you know something about it; but when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind - Lord Kelvin
Decisions are made on data. Otherwise those decisions are just gut feel, intuition, and, at their core, guesses. When you are guessing with other people's money, you have a low probability of keeping your job or of the business staying in business.
... a tale told by an idiot, full of sound and fury, signifying nothing - Shakespeare
When we hear personal anecdotes about how to correct a problem, along with the conjecture that those anecdotes apply outside the experience of the individual telling them - beware. Until a conjecture is tested, it is just a conjecture.
He uses statistics as a drunken man uses lampposts - for support rather than illumination - Andrew Lang
We often confuse a symptom with the cause. When reading about all the failures in IT projects - the probability of failure, the number of failures versus successes - there is rarely, in those naive posts on the topic, any assessment of the cause of the failure. The root cause analysis is not present. The Chaos Report is the most egregious of these.
There is no merit where there is no trial; and till experience stamps the mark of strength, cowards may pass for heroes, and faith for falsehood - A. Hill
Tossing out anecdotes, platitudes, and misquoted quotes does not make a credible argument for anything. "I knew a person who did X successfully, therefore you should have the same experience" is common. Or, "just try it, you may find it works for you just like it worked for me."
It seems there are no Principles or tested Practices in this approach to improving project success - just platitudes and anecdotes masking chatter as process improvement advice.
I started to write a detailed exposition using this material for the #NoEstimates conjecture that decisions can be made without an estimate. But Steve McConnell's post is much better than anything I could have done. So here's the wrap up...
If it is conjectured that decisions - any decisions, some decisions, self-selected decisions - can be made in the presence of uncertainty without also making an estimate of the outcome of the decision, its cost, and its impact, then let's hear how, so we can test the conjecture outside personal opinion and anecdote.
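For reference, here is one standard form that estimate takes: the expected monetary value (EMV) calculation from decision analysis, which is the kind of probability-weighted estimate the conjecture would have to do without. The options, probabilities, and dollar figures below are entirely hypothetical:

```python
# Two candidate decisions, each with estimated outcomes and probabilities.
# All figures are hypothetical, chosen only to illustrate the method.
options = {
    "build in-house": [
        (0.6, 400_000),   # 60% chance: delivered value if it succeeds
        (0.4, -150_000),  # 40% chance: sunk cost if it fails
    ],
    "buy off-the-shelf": [
        (0.9, 250_000),   # 90% chance: value of a working package
        (0.1, -50_000),   # 10% chance: license cost written off
    ],
}

def expected_value(outcomes):
    """Probability-weighted value: the estimate the decision rests on."""
    return sum(p * v for p, v in outcomes)

for name, outcomes in options.items():
    print(f"{name}: EMV = ${expected_value(outcomes):,.0f}")
```

Both the probabilities and the payoffs here are themselves estimates; removing estimating removes the inputs to the comparison, which is the point being argued.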
It's time for #NoEstimates advocates to provide some principle-based examples of how to make decisions in the presence of uncertainty without estimating. The books below are popular treatments (books without the heavy math), but they are still capable of conveying the principles of the topic and can be a source of learning.
- Flaws and Fallacies in Statistical Thinking, Stephen K. Campbell, Prentice Hall, 1974
- The Economics of Iterative Software Development: Steering Toward Better Business Results, Walker Royce, Kurt Bittner, and Mike Perrow, Addison Wesley, 2009.
- How Not to be Wrong: The Power of Mathematical Thinking, Jordan Ellenberg, Penguin Press, 2014
- Hard Facts, Dangerous Half-Truths & Total Nonsense: Profiting from Evidence Based Management, Jeffery Pfeffer and Robert I. Sutton, Harvard Business School Press, 2006.
- How to Measure Anything, Finding the Value of Intangibles in Business, 3rd Edition, Douglas W. Hubbard, John Wiley & Sons, 2014.
- Standard Deviations: Flawed Assumptions, Tortured Data, and Other Ways to Lie With Statistics, Gary Smith
- Center for Informed Decision Making
- Decision Making for the Professional, Peter McNamee and John Celona
Some actual math books on the estimating problem:
- Probability Methods for Cost Uncertainty Analysis, Paul R. Garvey
- Making Hard Decisions: An Introduction to Decision Analysis, 2nd Edition, Robert T. Clemen, Duxbury Press, 1996.
- Estimating Software Intensive Systems, Richard D. Stutzke, Addison Wesley, 2005.
- Probabilities as Similarly Weighted Frequencies, Antoine Billot, Itzhak Gilboa, Dov Samet, and David Schmeidler