Phillip Armour has a classic article, "Ten Unmyths of Project Estimation," Communications of the ACM (CACM), November 2002, Vol. 45, No. 11. Several of these unmyths are applicable to the current #NoEstimates concept. Much of the misinformation claiming that estimating is a smell of dysfunction can be traced to these unmyths.
Mythology is not a lie ... it is metaphorical. It has been well said that mythology is the penultimate truth - Joseph Campbell, The Power of Myth
Using Campbell's quote, myths are not untrue. They are an essential truth, but wrapped in anecdotes that are not literally true. In our software development domain a myth is a truth that seems to be untrue. This is Armour's origin of the unmyth.
The unmyth is something that seems to be true but is actually false.
Let's look at the three core conjectures of the #NoEstimates paradigm:
- Estimates cannot be accurate - we cannot get an accurate estimate of cost, schedule, or probability that the result will work.
- We can't say when we'll be done or how much it will cost.
- All estimates are commitments - making estimates makes us committed to the number that results from the estimate.
The Accuracy Myth
Estimates are not single numeric values; they are probability distributions. If the probability distribution below represents the probability of the duration of a project, there is a finite minimum - some duration below which the project cannot be completed.
There is a highest-probability, or Most Likely, duration for the project. This is the Mode of the distribution. There is a midpoint in the distribution, the Median - the value below which half of the possible completion times fall. Then there is the Mean of the distribution - the average of all the possible completion times. And of course the Flaw of Averages is in effect for any decisions being made on this average value. †
“It is moronic to predict without first establishing an error rate for a prediction and keeping track of one’s past record of accuracy” — Nassim Nicholas Taleb, Fooled By Randomness
If we want to answer the question What is the probability of completing ON OR BEFORE a specific date, we can look at the Cumulative Distribution Function (CDF) of the Probability Distribution Function (PDF). In the chart below the PDF has the earliest finish in mid-September 2014 and the latest finish early November 2014.
The 50% probability is 23 September 2014. In most of our work, we seek an 80% confidence level of completing ON OR BEFORE the need date.
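To make the PDF/CDF discussion concrete, here is a minimal sketch using a triangular duration distribution. The day counts are invented for illustration, not data from any real project; the point is how the Mode, Median, Mean, and the 80% confidence level fall out of the same distribution:

```python
import random
import statistics

random.seed(42)

# Hypothetical triangular duration model (values in working days):
# minimum possible, most likely (the Mode), and maximum possible.
LOW, MODE, HIGH = 40, 50, 75

# Sample the distribution; sorting gives us the empirical CDF.
samples = [random.triangular(LOW, HIGH, MODE) for _ in range(100_000)]
samples.sort()

median = statistics.median(samples)      # 50% of outcomes fall at or below this
mean = statistics.mean(samples)          # the "average" - beware the Flaw of Averages
p80 = samples[int(0.80 * len(samples))]  # 80% confidence of finishing on or before

print(f"Mode (most likely): {MODE} days")
print(f"Median: {median:.1f} days, Mean: {mean:.1f} days")
print(f"80% confident we finish on or before: {p80:.1f} days")
```

Because the distribution is right-skewed, the Mean sits above the Mode, and the 80% confidence date sits later still - which is exactly why margin is needed to protect a probabilistic date.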
The project then MUST have schedule, cost, and technical margin to protect that probabilistic date.
How much margin is another topic.
But projects without margin are late, over budget, and likely don't work on day one. You can't complain about poor project performance if you don't have margin, risk management, and a plan for managing both, along with the technical processes.
- No individual work element is deterministic.
- Each work element has some type of dependency on the previous work element and the following work element.
- Even if all the work elements are Independent and sitting in a Kanban queue, unless we have unlimited servers for that queue, being late on the current piece of work will delay the following work.
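The effect described in the bullets above can be sketched with a small Monte Carlo simulation. The five serial work elements and their duration parameters are invented for illustration; the point is that a plan built from most-likely values alone has a low probability of being met:

```python
import random

random.seed(1)

# Five serial work elements, each with a made-up right-skewed duration
# (min, most-likely, max) in days. Being late on one delays the next.
tasks = [(3, 5, 12)] * 5
most_likely_total = sum(ml for _, ml, _ in tasks)  # the naive "deterministic" plan

totals = []
for _ in range(50_000):
    # Total duration is the sum of the probabilistic task durations.
    totals.append(sum(random.triangular(lo, hi, ml) for lo, ml, hi in tasks))

on_time = sum(t <= most_likely_total for t in totals) / len(totals)
print(f"Plan built from most-likely values: {most_likely_total} days")
print(f"Probability of finishing on or before that plan: {on_time:.0%}")
```

With right-skewed tasks, the sum of the most-likely values is well below the mean of the total, so the "plan" is missed most of the time.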
So what we need is not Accurate estimates; we need Useful estimates. The usefulness of an estimate is the degree to which it helps make optimal business decisions. The process of estimating is Buying Information. The value of an estimate, like all value, is bounded by the cost to obtain that information. The value of the estimate is the opportunity cost: the difference between the business decision made with the estimate and the business decision made without the estimate. ‡
Anyone suggesting that simple serial work streams can accurately forecast the completion time MUST read Forecasting and Simulating Software Development Projects: Effective Modeling of Kanban & Scrum Projects using Monte Carlo Simulation, by Troy Magennis.
In this book are the answers to all the questions those in the #NoEstimates camp say can't be answered.
The Accuracy Answer
- All work is probabilistic.
- Discover the Probability Distribution Functions for the work.
- If you don't know the PDF, make one up - we use -5%/+15% for everything until we know better.
- If you don't know the PDF, go look in databases of past work for your domain. Here are some databases:
But remember, making estimates is how you make business decisions with opportunity costs. Those opportunity costs are the basis of Microeconomics and Managerial Finance.
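A hedged sketch of the "make one up" advice above: wrap invented point estimates in the -5%/+15% band as triangular distributions, then roll them up with Monte Carlo to get confidence levels instead of a single number. All dollar figures are assumptions for illustration:

```python
import random

random.seed(7)

# Point estimates (hypothetical, in $K) for work elements where no
# historical PDF exists; each gets the default -5%/+15% band.
point_estimates = [120, 80, 200, 60]

def sample_total():
    # Triangular PDF: low = -5%, mode = the estimate, high = +15%.
    return sum(random.triangular(0.95 * e, 1.15 * e, e) for e in point_estimates)

totals = sorted(sample_total() for _ in range(20_000))
p50 = totals[len(totals) // 2]
p80 = totals[int(0.80 * len(totals))]
print(f"Sum of point estimates: {sum(point_estimates)} $K")
print(f"50% confidence cost: {p50:.0f} $K, 80% confidence: {p80:.0f} $K")
```

Note the asymmetric band pushes both the 50% and 80% confidence costs above the naive sum of the point estimates - the single-number "estimate" was never the safe number.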
Cone of Uncertainty and Accuracy of Estimating
There is a popular myth that the Cone of Uncertainty prevents us from making accurate estimates. We now know we need useful estimates, and those are not prevented by the Cone of Uncertainty. Here's the guidance we use on our Software Intensive Systems projects.
Finally in the estimate accuracy discussion comes the cost estimate. The chart below shows how cost is driven by the probabilistic elements of the project. Which brings us back to the fundamental principle that all project work is probabilistic. Modeling the cost, schedule, and probability of technical success is mandatory on any non-trivial project. By trivial I mean a de minimis project - one where, if we're off by a lot, it doesn't really matter to those paying.
The Commitment Unmyth
So now to the big bugaboo of #NoEstimates: estimates are evil, because they are taken as commitments by management. They're taken as commitments by Bad Management - uninformed management, management that was asleep in the high school probability and statistics class, management that claims to have a business degree but never took the business statistics class.
So let's clear something up:
Commitment is how Business Works
Here's an example taken directly from ‡
Estimation is a technical activity of assembling technical information about a specific situation to create hypothetical scenarios that (we hope) support a business decision. Making a commitment based on these scenarios is a business function.
The Technical “Estimation” decisions include:
- When does our flight leave?
- How do we get there? Car? Bus?
- What route do we take?
- What time of day and traffic conditions?
- How busy is the airport, how long are the lines?
- What is the weather like?
- Are there flight delays?
This kind of information allows us to calculate the amount of time we should allow to get there.
The Business “Commitment” and Risk decisions include:
- What are the benefits in catching the flight on time?
- What are the consequences of missing the plane?
- What is the cost of leaving early?
These are the business consequences that determine how much risk we can afford to take.
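One way to sketch the separation above, with an invented travel-time model: the technical estimate produces the distribution of travel times; the business decision picks the confidence level it is willing to pay for, based on the consequences of missing the plane:

```python
import random

random.seed(3)

# Technical "Estimation": a made-up travel-time model (minutes) that
# folds in traffic and security-line uncertainty.
trips = sorted(random.triangular(45, 120, 60) for _ in range(50_000))

def leave_early_by(confidence):
    """Minutes to allow so we arrive on time with the given confidence."""
    return trips[int(confidence * len(trips)) - 1]

# Business "Commitment": the consequence of missing the flight sets the
# confidence we buy. A refundable commuter hop might justify 70%; a
# once-a-year international connection justifies 99%.
for confidence in (0.70, 0.95, 0.99):
    print(f"{confidence:.0%} confidence -> leave {leave_early_by(confidence):.0f} minutes early")
```

The estimate (the distribution) never changes; only the business decision about how much risk to carry changes the number we act on.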
Along with these of course is the risk associated with the uncertainty in the decisions. So estimating is also Risk Management and Risk Management is management in the presence of uncertainty. And the now familiar presentation from this blog.
Risk Management is how Adults manage projects - Tim Lister. Risk management is managing in the presence of uncertainty. All project work is probabilistic and creates uncertainty. Making decisions in the presence of uncertainty requires - mandates actually - making estimates (otherwise you're guessing, pulling numbers from the rectal database). So if we're going to have an Adult conversation about managing in the presence of uncertainty, it's going to be around estimating: making estimates, improving estimates, making estimates valuable to the decision makers.
Estimates are how business works - searching for alternatives to estimating means willfully ignoring the needs of the business. Proceed at your own risk.
† This average notion is common in the #NoEstimates community: take all the past stories or story points, find the average value, and use that for future values. That is a serious error in statistical thinking, since without the variance being acceptable, that average can be wildly off from the actual future outcomes of the project.
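To see why the averaging approach described in this footnote fails, here is a small sketch with two invented velocity histories that share the same average but differ wildly in variance. The average-based forecast is identical for both; the 80% confidence forecast is not:

```python
import random
import statistics

random.seed(11)

# Two hypothetical velocity histories (story points per sprint) with the
# same average (20) but very different variance.
histories = {
    "steady":  [20, 21, 19, 20, 20, 21, 19, 20],
    "erratic": [5, 35, 10, 30, 8, 32, 20, 20],
}
backlog = 200  # story points remaining

results = {}
for name, history in histories.items():
    naive = backlog / statistics.mean(history)  # the Flaw of Averages forecast
    runs = []
    for _ in range(10_000):
        done, sprints = 0, 0
        while done < backlog:
            done += random.choice(history)      # resample past velocities
            sprints += 1
        runs.append(sprints)
    runs.sort()
    results[name] = runs[int(0.80 * len(runs))]  # 80% confidence sprint count
    print(f"{name}: average says {naive:.1f} sprints, "
          f"80% confidence needs {results[name]} sprints")
```

Both teams "average" the same forecast, but the erratic team needs more sprints at any useful confidence level - the variance, not the average, drives the commitment.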
‡ Unmythology and the Science of Estimation, Corvus International, Inc., Chicago Software Process Improvement Network, C-Spin, October 23, 2013
As far as hypotheses are concerned, let no one expect anything certain from astronomy, which cannot furnish it, lest he accept as the truth ideas conceived for another purpose, and depart from this study a greater fool than when he entered it. - Andreas Osiander's (the editor's) preface to De Revolutionibus, Copernicus, quoted in To Explain the World: The Discovery of Modern Science, Steven Weinberg
In the realm of project, product, and business management we come across nearly endless ideas conjectured to solve some problem or another.
Replace the word astronomy with whatever word describes the latest conjecture that some solution will fix some unnamed problem.
From removing the smell of dysfunction, to increasing productivity by 10 times, to removing the need to have any governance frameworks, to making decisions in the presence of uncertainty without the need to know the impacts of those decisions.
In the absence of any hypothesis by which to test those conjectures, departing a greater fool than when entering is the likely result. In the absence of a testable hypothesis, any conjecture is an unsubstantiated anecdotal opinion.
An anecdote is a sample of one from an unknown population
And that makes those conjectures doubly useless: not only can they not be tested, they are likely applicable only to those making the conjectures.
If we are ever to discover new and innovative ways to increase the probability of success for our project work, we need to move far away from conjecture, anecdote, and untestable ideas, and toward evidence-based assessment of the problem, the proposed solutions, and the evidence that the proposed correction will in fact result in improvement.
One Final Note
As a first-year grad student in Physics I learned a critical concept that is missing from much of the conversation around process improvement. When an idea is put forward in the science and engineering world, the very first thing to do is a literature search.
- Is this idea recognized by others as being credible? Are there supporting studies that confirm the effectiveness and applicability of the idea outside the author's own experience?
- Are those supporting the idea themselves credible, or just following the herd?
- Are there references to the idea that have been tested outside the author's own experience?
- Are there criticisms of the idea in the literature? Seeking critics is itself a critical success factor in testing any idea. There would be knock-down drag-out shouting matches in the halls of the physics building about an idea. Nobel Laureates would be waving arms and speaking in loud voices. In the end it was a test of new and emergent ideas. And anyone who takes offense at being criticized has no basis to stand on for defending his idea.
- Is the idea the basis of a business? That is, is the author selling something - a book, a seminar, consulting services?
- Has this idea been tested by someone else? We'd tear down our experiment, have someone across the country rebuild it, run the data, and see if they got the same results.
Without some way to assess the credibility of any idea - either through replication or through assessment against a baseline (governance framework, accounting rules, regulations) - the idea is just an opinion. And as Daniel Patrick Moynihan said:
Everyone is entitled to his own opinion, but not his own facts.
and of course my favorite
Again and again and again — what are the facts? Shun wishful thinking, ignore divine revelation, forget what "the stars foretell," avoid opinion, care not what the neighbors think, never mind the unguessable "verdict of history" — what are the facts, and to how many decimal places? You pilot always into an unknown future; facts are your single clue. Get the facts! - Robert Heinlein (1978)
The other consideration when staffing the project is the type of information technology project, and obtaining people experienced in this type of project. If the kind of project is new to everyone, or almost everyone, on the project team, the risk of failure is large. Only staff who have done it before and know the pitfalls of the target project type will be able to guide the rest of the team to a successful conclusion.
Another tip for project managers in achieving an on-time and on-budget project is to include contingency in the estimates - effort, schedule, and budget contingency. Unfortunately, even when this is done, the project manager may allocate the contingency into the initial project plans, and the contingency is lost - it will be used no matter what. To manage effectively, the contingency must be held by the project manager for unexpected complexities or risks. A best practice is to allocate hours of effort to each project team member excluding project contingency. In other words, team members understand that they are to deliver the project in the estimated hours without contingency. The project manager is then able to allocate hours to overtime, or to add time to the schedule if necessary, out of contingency, and still be within the approved budget.
Brought to you by PMLink.com. Author: Miguel Ylareina.
There's a common notion in some agile circles that projects aren't the right vehicle for developing products. This is usually expressed by Agile Coaches. As a business manager applying Agile to develop products, as well as delivering operational services based on those products, projects are how we account for the expenditures of those outcomes, manage the resources, and coordinate the resources needed to produce products as planned.
In our software product business, we use both a Product Manager and a Project Manager. These roles are separate and at the same time overlapping.
- Products are customer facing. Market needs, business models for revenue, cost, and earnings, interfaces with Marketing and Sales, and other business management processes - contracts, accounting - are Product focused.
Product Managers focus on Markets. What features are the market segments demanding? What features must ship and what features can we drop? What are the sales impacts of any slipped dates?
- Projects are internally facing - internal resources need looking after. The notion of self organizing is fine, but self directed only works when the work efforts have direct contact with the customers. And even then, without some oversight - governance - a self directed team has limitations in the larger context of the business. If the self directed team IS the business, then the need for external governance is removed. This would be rare in any non-trivial business.
Project Managers are inward focused to the resource allocation and management of the development teams. How can we get the work done to meet the market demand? When can we ship the product to maintain the sales forecast?
In very small companies and startups these roles are usually performed by the same person.
Once we move beyond the sole proprietor and his friends, separation of concerns takes over. These roles become distinct.
- The Product Manager is a member of the Business Development Team, tasked with the business side of the product delivery process.
- The Project Management Team (PMs and Coordinators, along with development leads and staff), is a member of the delivery team tasked with producing the capabilities needed to capture and maintain the market.
Products are about What and Why. Projects are about Who, How, When, and Where. (From Rudyard Kipling's six honest serving-men.)
Product Management focuses on the overall product vision - usually documented in a Product Roadmap, showing the release cycles of capabilities and features as a function of time. Project Management is about logistics, schedule, planning, staffing, and work management to produce products in accordance with the Road Map.
When agile says it's customer focused, this is true only when there is one customer for the product, rather than a market for the product, and that customer is on site. That would not be a very robust product company if it had only one customer.
When we hear Products are not Projects, ask in what domain, business size, and value at risk is it possible not to separate these concerns between Products and Projects?
Risk Management is How Adults Manage Projects - Tim Lister
Let's start with some background on Risk Management
Tim's quote sets the paradigm for managing the impediments to success in all our endeavors
It says volumes about project management and project failure. It also means that managing risk is managing in the presence of uncertainty. And managing in the presence of uncertainty means making estimates about the impacts of our decisions on future outcomes. So you can invert the statement when you hear we can make decisions in the absence of estimates.
Tim's update is titled Risk Management is Project Management for Grownups.
For those interested in managing projects in the presence of uncertainty and the risk that uncertainty creates, here's a collection from the office library, in no particular order:
- DOD Risk Management Guide V7
- Software Engineering Risk Management
- Risk Happens
- Making Hard Decisions
- Effective Opportunity Management for Projects
- Effective Risk Management: Some Keys to Success
- Technical Risk Management
- Project Risk Management: Processes, Techniques and Insights
- Managing Risk: Methods for Software Systems Development
- SEI: Continuous Risk Management
Here's a summary from a recent meeting around decision making in the presence of risk.
There are enough opinions to paper the side of a battleship. With all these opinions, nobody has a straightforward answer that is applicable to all projects. There are two fundamental understandings though: (1) everyone has a theory, and (2) there is no singular cause that is universally applicable.
In fact most of the suggestions on project failures have little in common. With that said, I'd suggest there is a better way to view the project failure problem.
What are the core principles, processes, and practices for project success?
I will suggest there are three common denominators consistently mentioned in the literature that are key to a project’s success:
- Requirements management. Success was not just defined by well-documented technical requirements, but by well-defined programmatic requirements and thresholds. Requirements creep is a challenge for all projects, no matter what method is used to develop the products or services from those projects. Requirements creep comes in many forms. But the basis for dealing with requirements creep starts with a Systems Engineering strategy to manage those requirements. Most IT and business software projects don't know about Systems Engineering, and that's a common cause failure mode.
- Early and continuous risk management , with specific steps defined for managing the risk once identified.
- Project planning. Without incredible luck, no project will succeed without a realistic and thorough plan for that success. It's completely obvious (at least to those managing successful projects), the better the planning, the more likely the outcome will match the plan.
Of the 155 defense project failures studied in "The core problem of project failure," T. Perkins, The Journal of Defense Software Engineering, Vol. 3, No. 11, p. 17, June 2006, the causes were:
- 115 – Project managers did not know what to do.
- 120 – Project manager overlooked implementing a project management principle.
- 125 – PMs allowed constraints imposed at higher levels to prevent them from doing what they should do.
- 130 – PMs do not believe the project management principles add value.
- 145 – Policies / directives prevented PMs from doing what they should do.
- 150 – Statutes prevented PMs from doing what they should do.
- 140 – PMs primary goal was other than project success.
- 135 – PMs believed a project management principle was flawed.
From this research, these numbers can be summarized into two larger classes:
- Lack of knowledge - the project managers and the development team did not know what to do
- Improper application of this knowledge - this starts with ignoring or overlooking a core principle of project success. This covers most of the sins of Bad Management, from compressed schedules and limited budgets to failing to produce credible estimates for the work.
So where do we start?
Let's start with some principles. But first a recap
- Good management doesn't simply happen. It takes qualified managers - on both the buyer and supplier side - to appropriately apply project management methods.
- Good planning doesn’t simply happen. Careful planning of work scope, WBS, realistic milestones, realistic metrics, and a realistic cost baseline is needed.
- It is hard work to provide accurate data about schedule, work performed, and costs on a periodic basis. Constant communication and trained personnel are necessary.
Five Immutable Principles of Project Success
- What capabilities are needed to fulfill the Concept of Operations, the Mission and Vision, or the Business System Requirements? Without knowing the answers to these questions, the requirements, features, and deliverables have no home. They have no testable reason for being in the project.
- What technical and operational requirements are needed to deliver these capabilities? With the needed capabilities confirmed by those using the outcomes of the project, the technical and operational requirements can be defined. These can be stated up front, or they can emerge as the project progresses. The capabilities are stable; all other things can change as discovery takes place. If you keep changing the capabilities, you're going to be on a Death March project.
- What schedule delivers the product or services on time to meet the requirements? Do you have enough money, time, and resources to show up as planned? No credible project is without a deadline and a set of mandated capabilities. Knowing there is sufficient everything on day one, and every day after that, is the key to managing in the presence of uncertainty.
- What impediments to success have mitigations, retirement plans, or "buy downs" in place to increase the probability of success? Risk Management is how Adults Manage Projects - Tim Lister - is a good place to start. The uncertainties of all project work come in two types: reducible and irreducible. For irreducible uncertainty we need margin. For reducible uncertainty we need specific retirement activities.
- What periodic measures of physical percent complete assure progress to plan? This question is based on a critical principle: how long are we willing to wait before we find out we're late? This period varies, but whatever it is, it must be short enough to take corrective action to arrive as planned. Agile does this every two to four weeks. In formal DOD procurement, measures of physical percent complete are taken every four weeks. The advantage of Agile is that working products must be produced every period. That is not the case in larger, more formal processes.
With these Principles, here are five Practices that can put them to work:
- Identify Needed Capabilities to achieve program objectives or the particular end state. Define these capabilities through scenarios from the customer point of view in units of Measure of Effectiveness (MoE) meaningful to the customer.
- Describe the business function that will be enabled by the outcomes of the project.
- Assess these functions in terms of Effectiveness and Performance.
- Define the Technical And Operational Requirements that must be fulfilled for the system capabilities to be available to the customer at the planned time and planned cost. Define these requirements in terms that are isolated from any implementation technical products or processes. Only then bind the requirements with technology.
- This can be a formal Work Breakdown Structure or an Agile Backlog
- The planned work is described in terms of deliverables.
- Describe the technical and operational Performance measures for each feature.
- Establish the Performance Measurement Baseline describing the work to be performed, the budgeted cost for this work, the organizational elements that produce the deliverables from this work, and the Measures of Performance (MoP) showing this work is proceeding according to cost, schedule, and technical performance.
- Execute the PMB’s Work Packages in the planned order, assuring all performance assessments are 0%/100% complete before proceeding. No rework, no transfer of activities to the future. Assure every requirement is traceable to work and all work is traceable to requirements.
- If there were no planned order, the work processes would be simple.
- But this is rarely the case on any enterprise or non-trivial project, since the needed capabilities usually have some sequential dependencies - for example, accept the Purchase Request before issuing the Purchase Order.
- Perform Continuous Risk Management for each Performance–Based Project Management® process area to Identify, Analyze, Plan, Track, Control, and Communicate programmatic and technical risk.
The integration of these five Practices is the foundation of Performance–Based Project Management®. Each Practice stands alone and at the same time is coupled with the other Practice areas. Each Practice contains specific steps for producing beneficial outcomes for the project, while establishing the basis for overall project success.
Each Practice can be developed to the level needed for specific projects. All five Practices are critical to the success of any project. If a Practice area is missing or poorly developed, the capability to manage the project will be jeopardized, possibly in ways not known until the project is too far along to be recovered.
Each Practice provides information needed to make decisions about the flow of the project. This actionable information is the feedback mechanism needed to keep a project under control. These control processes are not impediments to progress, but are the tools needed to increase the probability of success.
Why All This Formality, Why Not Just Start Coding, Let Customer Tell Us To Stop?
All business works on managing the flow of cost in exchange for value. All business has a fiduciary responsibility to spend wisely. Visibility to the obligated spend is part of Managerial Finance. Opportunity Cost is the basis of Microeconomics of decision making.
The 5 Principles and 5 Practices are the basis of good business management of the scarce resources of all businesses.
This is how adults manage projects
When confronted with making decisions on software projects in the presence of uncertainty, we can turn to an established and well tested set of principles found in Software Engineering Economics.
First a definition from Guide to the Systems Engineering Body of Knowledge (SEBoK)
Software Engineering Economics is concerned with making decisions within the business context to align technical decisions with the business goals of an organization. Topics covered include fundamentals of software engineering economics (proposals, cash flow, the time-value of money, planning horizons, inflation, depreciation, replacement and retirement decisions); not for-profit decision-making (cost-benefit analysis, optimization analysis); estimation, economic risk and uncertainty (estimation techniques, decisions under risk and uncertainty); and multiple attribute decision making (value and measurement scales, compensatory and non-compensatory techniques).
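As a small illustration of the time-value-of-money topic in that definition, here is a sketch of a net-present-value comparison between two invented cash-flow options. The discount rate and all dollar figures are assumptions, not data from any real decision:

```python
# Minimal sketch of two Software Engineering Economics topics: the
# time-value of money and a simple cost-benefit comparison.

def npv(rate, cashflows):
    """Net present value of yearly cashflows, with cashflows[0] at year 0."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

# Option A: build now ($500K out), returning $200K/yr for four years.
# Option B: defer a year for a cheaper build, with returns one year later.
option_a = [-500, 200, 200, 200, 200]
option_b = [0, -450, 200, 200, 200]

rate = 0.10  # assumed discount rate
for name, flows in (("A: build now", option_a), ("B: defer", option_b)):
    print(f"{name}: NPV = {npv(rate, flows):.0f} $K")
```

At a zero discount rate the options look closer than they are; discounting the cash flows is what makes the business decision visible, which is the point of treating software decisions as engineering economics.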
Engineering Economics is one of the Knowledge Areas for educational requirements in Software Engineering defined by INCOSE, along with Computing Foundations, Mathematical Foundations, and Engineering Foundations.
A critical success factor for all software development is to model the system under development as a holistic, value-providing entity. This approach has been gaining recognition as a central process of systems engineering. The use of modeling and simulation during the early stages of the system design of complex systems and architectures can:
- Document system needed capabilities, functions and requirements,
- Assess the mission performance,
- Estimate costs, schedule, and needed product performance capabilities
- Evaluate tradeoffs,
- Provide insights to improve performance, reduce risk, and manage costs.
The process above can be performed in any lifecycle duration, from the formal top-down INCOSE Vee to Agile software development. The process rhythm is independent of the principles.
This is a critical communication factor - separation of Principles, Practices, and Processes, establishes the basis of comparing these Principles, Practices, and Processes across a broad spectrum of domains, governance models, methods, and experiences. Without a shared set of Principles, it's hard to have a conversation.
Developing products or services with other people's money means we need a paradigm to guide our activities. Since we are spending other people's money, the economics of that process is guided by Engineering Economics.
Engineering economic analysis concerns techniques and methods that estimate output and evaluate the worth of products and services relative to their costs. (We can't determine the value of our efforts without knowing the cost to produce that value.) Engineering economic analysis is used to evaluate system affordability. Fundamental to this knowledge area are value and utility, classification of cost, time value of money, and depreciation. These are used to perform cash flow analysis, financial decision making, replacement analysis, break-even and minimum cost analysis, accounting and cost accounting. Additionally, this area involves decision making involving risk and uncertainty and estimating economic elements. [SEBoK, 2015]
The Microeconomic aspects of the decision making process are guided by the principles of making decisions regarding the allocation of limited resources. In software development we always have limited resources - time, money, staff, facilities, performance limitations of software and hardware.
If we are going to increase the probability of success for software development projects we need to understand how to manage in the presence of the uncertainty surrounding time, money, staff, facilities, performance of products and services and all the other probabilistic attributes of our work.
To make decisions in the presence of these uncertainties, we need to make estimates about the impacts of those decisions. This is an unavoidable consequence of how the decision making process works.
The opportunity cost of any decision between two or more choices means there is a cost for NOT choosing one or more of the available choices. This is the basis of microeconomics of decision making. What's the cost of NOT selecting an alternative?
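A toy sketch of that opportunity-cost idea, with invented costs and returns. Choosing one option means forgoing the net value of the best alternative not chosen, and that forgone value can only be known by estimating both options:

```python
# Hypothetical choice between two features competing for the same budget.
# All numbers are invented for illustration.
choices = {
    "Feature A": {"cost": 100, "expected_return": 180},
    "Feature B": {"cost": 100, "expected_return": 150},
}

def net(choice):
    # Net value of an option: expected return minus cost.
    return choice["expected_return"] - choice["cost"]

best = max(choices, key=lambda name: net(choices[name]))
# Opportunity cost of the chosen option = net value of the best
# alternative we did NOT choose.
forgone = max(net(c) for name, c in choices.items() if name != best)
print(f"Choose {best}; opportunity cost = {forgone}")
```

Without an estimate of both expected returns there is no way to compute the forgone value, which is the microeconomic point the surrounding text makes.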
So when it is conjectured we can make a decision in the presence of uncertainty without estimating the impact of that decision, it's simply NOT true.
That notion violates the principle of Microeconomics
The popular notion that Cynefin can be applied in the software development domain, as a way of discussing the problems involved in writing software for money, misses the profession of Systems Engineering. From Wikipedia, Cynefin is...
The framework provides a typology of contexts that guides what sort of explanations or solutions might apply. It draws on research into complex adaptive systems theory, cognitive science, anthropology, and narrative patterns, as well as evolutionary psychology, to describe problems, situations, and systems.
While Cynefin uses the terms Complexity and Complex Adaptive System, it is applied from the observational point of view. That is, the system exists outside of our influence to control its behavior - we are observers of the system, not engineers of the system.
There are certainly engineered systems that transform into complex adaptive systems with emergent behaviors that cause the system to fail; an example appears below. But for small teams working on small projects, this is not likely to be the case when engineering principles are applied.
In the agile community it is popular to use the terms complex, complexity, complicated, and complex adaptive system interchangeably, and many times wrongly. These terms are often overloaded with an agenda used to push a process or even a method. It is also popular in the agile community to claim we have no control over the system, so we must adapt to its behaviors. This is likely the case in only one condition: the chaotic behavior of Complex Adaptive Systems. And even then, only when we have failed to establish the basis for how the CAS was formed.
It is highly unlikely that those working in the agile community actually work on complex systems that evolve AND at the same time are engineered at the lower levels. They've simply let the work and the resulting outcomes emerge and become Complex, Complicated, and create Complexity. They are observers of the outcomes, not engineers of the outcomes.
Here's one example of an engineered system that did become a CAS despite all the efforts of the Systems Engineers. I worked on the Class I and II sensor platforms of the Army's Future Combat Systems (FCS). Eventually FCS was canceled, for all the right reasons. But for small teams of agile developers, the outcomes become complex when the Systems Engineering processes are missing. The Cynefin partitions beyond Obvious emerge, for the most part, when Systems Engineering is missing.
First some definitions
- Complex - consisting of many different and connected parts; not easy to analyze or understand; complicated or intricate. When a system or problem is considered complex, analytical approaches, like dividing it into parts to make the problem tractable, are not sufficient, because it is the interactions of the parts that make the system complex, and without these interconnections the system no longer functions.
- Complex System - is a functional whole, consisting of interdependent and variable parts. Unlike conventional systems, the parts need not have fixed relationships, fixed behaviors or fixed quantities, and their individual functions may be undefined in traditional terms.
- Complicated - containing a number of hidden parts, which must be revealed separately. Mutual interaction of the components creates nonlinear behaviors of the system. In principle all systems are complex. The number of parts or components is irrelevant in the definition of complexity. There can be complexity - nonlinear behavior - in small systems or large systems.
- Complexity - while there is no standard definition, complexity is a view of systems that suggests simple causes can result in complex effects. The term is generally used to characterize a system with many parts whose interactions with each other occur in multiple ways. Complexity can occur in a variety of forms:
- Complex behaviour
- Complex mechanisms
- Complex situations
- Complex systems
- Complex data
One more item we need is the types of Complexity
- Type 1 - fixed systems, where the structure doesn't change as a function of time.
- Type 2 - systems where time causes changes. This can be repetitive cycles or change with time.
- Type 3 - moves beyond repetitive systems into organic systems, where change is extensive and non-cyclic in nature.
- Type 4 - self-organizing systems, where we can combine the internal constraints of closed systems, like machines, with the creative evolution of open systems, like people.
And Now To The Point
When we hear complex, complexity, complex systems, or complex adaptive system, pause to ask: what kind of complex are you talking about? What Type of complex system? To what system are you applying the term? Have you classified that system in a way that actually matches a real system? Don't accept anyone saying the system is emerging and becoming too complex for us to manage, unless that is in fact the case after all the Systems Engineering activities have been exhausted. It's a cheap excuse for simply not doing the hard work of engineering the outcomes.
It is common to use the terms complex, complicated, and complexity interchangeably. And software development is classified, or mis-classified, as one, two, or all three. It is also common to toss around these terms with no actual understanding of their meaning or application.
We need to move beyond buzz words. Words like Systems Thinking. Building software is part of a system. There are interacting parts that when assembled, produce an outcome. Hopefully a desired outcome. In the case of software the interacting parts are more than just the parts. Software has emergent properties. A Type 4 system, built from Type 1, 2, and 3 systems. With changes in time and uncertainty, modeling these systems requires stochastic processes. These processes depend on estimating behaviors as a starting point.
The understanding that software development is an uncertain (stochastic) process is well known, starting in the 1980s with COCOMO. Later models, like the Cone of Uncertainty, made it clear that these uncertainties themselves evolve with time. The current predictive models based on stochastic processes include Monte Carlo simulation of networks of activities, Real Options, and Bayesian Networks. Each is directly applicable to modeling software development projects.
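COCOMO makes the point concrete: effort and duration are power laws of estimated size, and the size input is itself an estimate. Below is a minimal sketch of the Basic COCOMO form, using the published Basic-mode coefficients from Boehm's 1981 book; a real estimate would use the Intermediate or Detailed modes with cost drivers.

```python
# Basic COCOMO (Boehm, 1981): effort and duration as power laws of size.
# (a, b) scale effort from KLOC; (c, d) scale duration from effort.
MODES = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def cocomo_basic(kloc, mode="organic"):
    a, b, c, d = MODES[mode]
    effort = a * kloc ** b           # person-months
    duration = c * effort ** d       # calendar months
    return effort, duration

effort, months = cocomo_basic(32, "organic")
print(f"{effort:.1f} person-months, {months:.1f} calendar months")
```

Because the exponent on size is near 1, any error in the KLOC estimate propagates almost one-for-one into the effort output, which is why the size estimate itself needs a variance attached.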
 Software Engineering Economics, Barry Boehm, Prentice-Hall, 1981.
Constructing a credible Integrated Master Schedule (IMS) requires sufficient schedule margin be placed at specific locations to protect key deliverables. One approach to determining this margin is the use of a Monte Carlo simulation tool.
This probabilistic margin analysis starts with the construction of a “best estimate” Integrated Master Schedule with the work activities arranged in a “best path” network.
While there may be “slack” in some of the activities, the Critical Path exists through this network for each Key Deliverable. This network of activities must show how each deliverable will arrive on or before the contractual need date. This “best path” network is the Deterministic Schedule – the schedule with fixed activity durations.
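As a sketch of the Deterministic Schedule idea, here is a forward pass through a small, hypothetical activity network, computing the fixed-duration finish and recovering the Critical Path. A real IMS has hundreds of linked tasks, but the mechanics are the same.

```python
# Forward pass through a tiny activity network (durations in days).
# Tasks, durations, and links are invented for illustration.
durations = {"A": 3, "B": 5, "C": 2, "D": 4, "E": 3}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}

finish = {}
critical_pred = {}
for task in ["A", "B", "C", "D", "E"]:          # topological order
    start = max((finish[p] for p in predecessors[task]), default=0)
    finish[task] = start + durations[task]
    if predecessors[task]:
        # The predecessor that finishes last drives this task's start
        critical_pred[task] = max(predecessors[task], key=finish.get)

# Walk back from the final task to recover the critical path
path, task = ["E"], "E"
while task in critical_pred:
    task = critical_pred[task]
    path.append(task)
print(f"duration={finish['E']}, critical path={'-'.join(reversed(path))}")
```

Here C has slack (it finishes before B), so the Critical Path runs A-B-D-E; only durations on that path move the deterministic finish date.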
By assigning a duration variance for each class of work activity, the Monte Carlo model shows at what confidence level the probabilistic delivery date occurs on or before the deterministic date. The needed schedule margin for each deliverable can then be derived from the Monte Carlo simulation. This activity network is referred to as the Probabilistic Schedule – the schedule with activity durations as random variables.
With the schedule margin inserted in front of each deliverable, the Deterministic Schedule becomes the basis of the Probabilistic Schedule. Next is a cycle of adjusting the Deterministic Schedule to assure the needed margin, producing the final Deterministic Schedule to be placed on baseline. As the program proceeds, this schedule margin is managed through a “margin burn down” process. Assessing the sufficiency of this margin for the remaining work is then part of the monthly program performance report.
Here's an example from an upcoming workshop on building and executing a credible Performance Measurement Baseline, based on the Wright Brothers' work.
For this to work we need several things:
- The work to be performed. This can be a network of activities in a schedule. It can be a collection of activities in a sprint. In both cases we need some approximation of how long it will take to accomplish the work. In both cases this means making an estimate of the Most Likely duration or work effort to produce the needed outcomes.
- This Most Likely value can come from many sources. But it does need to be the Most Likely, not the average, not some made up number, not some cockamamie guess.
Here's how to use a Monte Carlo tool to determine the likelihood of completing on or before a given date, when there is a schedule of the work with Most Likely values for the work durations and the variances in those durations.
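A minimal sketch of that process, using the standard library's triangular distribution in place of a commercial Monte Carlo tool. The activity triples (minimum, Most Likely, maximum) and the need date are invented for illustration.

```python
# Monte Carlo sketch of schedule confidence: each activity duration is a
# triangular random variable; completion is the sum along a serial path.
import random

random.seed(7)
activities = [(8, 10, 16), (4, 5, 9), (12, 15, 24)]   # (low, mode, high) days
target = 34                                           # need date, in days

trials = 20_000
hits = sum(
    # random.triangular takes (low, high, mode)
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in activities) <= target
    for _ in range(trials)
)
confidence = hits / trials
print(f"P(complete on or before day {target}) ~ {confidence:.0%}")
```

A real model samples the whole activity network, not a serial path, so merge points and correlated durations will shift the confidence level; the margin needed is the gap between the target date and the date at the desired confidence.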
When I hear a post like:
Two things come to mind:
- All project work is probabilistic. There is no such thing as a deterministic estimate. OK, there is. But those estimates are wrong, dead wrong, willfully ignorant wrong. If you're making deterministic estimates, you've chosen to ignore the basic processes of probability and statistics.
- There is an important difference between Statistics and Probability. Both are needed when making decisions in the presence of uncertainty.
All projects have uncertainty.
And there are two kinds of uncertainty on all projects. Reducible and Irreducible.
Reducible uncertainty (on the right) is described by the probability of some outcome. There is an 82% probability that we'll be complete on or before the second week in November, 2016. Irreducible uncertainty (on the left) is described by the Probability Distribution Function (PDF) for the underlying statistical processes.
In both cases estimating is required. There is no deterministic way to produce an assessment of an outcome in the presence of uncertainty without making estimates. This is simple math. In the presence of uncertainty, the project variables are random variables, not deterministic variables. If there is no uncertainty, there is no need to estimate, just measure.
When we hear that #NoEstimates is about empirical data used to forecast the future, let’s look deeper into the term and the processes of empiricism.
First, an empiricist rejects the logical necessity for scientific principles and bases processes on observations. 
While managing other people’s money in the production of value in exchange for that money, there are principles by which that activity is guided. For the empiricist, principles are not immediately evident. But principles are called principles because they are indemonstrable; they cannot be deduced from other premises nor proved by any formal procedure. They are accepted because they have been observed to be true in many instances and false in none.
Second, with empirical data comes two critical assumptions that must be tested before that data has any value in decision making.
- The variance in the sampled data is sufficiently narrow to allow sufficient confidence in forecasting the future. A ±45% variance is of little use. Next is the killer problem.
- With an acceptable variance, the assumption that the future is like the past must be confirmed. If this is not the case, that acceptably sampled data with the acceptable variance is not representative of the future behavior of the project.
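Both assumptions can be tested directly. Here is a sketch using hypothetical sprint throughput: the coefficient of variation checks the first assumption, and a crude comparison of the two halves of the sample checks whether the past still looks like the future (a real analysis would apply a proper stationarity test to more data).

```python
# Test the two assumptions behind forecasting from empirical data.
# Throughput numbers are hypothetical.
from statistics import mean, stdev

throughput = [21, 24, 19, 23, 25, 22, 31, 35, 29, 33]  # stories per sprint

cv = stdev(throughput) / mean(throughput)          # coefficient of variation
first, second = throughput[:5], throughput[5:]
drift = abs(mean(second) - mean(first)) / mean(first)

print(f"CV = {cv:.0%}, mean shift between halves = {drift:.0%}")
if cv > 0.45 or drift > 0.20:
    print("Sample fails the assumptions - don't forecast from it as-is")
```

In this invented sample the variance is acceptable but the process has drifted between the two halves, so forecasting from the pooled data would misrepresent the future.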
Understanding this basis of empiricism is critical to understanding the notion of making predictions in the presence of uncertainty about the future.
Next let’s address the issue of what is an estimate. It seems obvious to all working in the engineering, science, and financial domains that an estimate is a numeric value or range of values for some measure that may occur at some time in the future. Making up definitions for estimate, or selecting definitions outside of engineering, science, and finance, is disingenuous. There is no need to redefine anything.
Estimation consists of finding appropriate values (the estimate) for the parameters of the system of concern in such a way that some criterion is optimized. 
The estimate has several elements:
- The quantity for the estimate – a numeric value we seek to learn about.
- The range of possible values for that quantity
- For estimates that have a range of values, the probability distribution of the values in that range - the Probability Distribution Function (PDF) for the estimated values. The range of values is described by the PDF, with a Most Likely, Median, Mode, and higher cumulants - that is, what’s the variance of the variance?
- For an estimate that has a probability of occurrence, the single numeric value for that probability and the confidence in that value. There is an 80% confidence of completing the project on or before the second week in November, 2005.
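These elements can be shown with a sampled triangular distribution standing in for the PDF of a duration estimate. The (low, Most Likely, high) parameters here are illustrative only.

```python
# An estimate as a distribution, not a point: sample a triangular PDF and
# read off the mode, median, mean, and an 80% confidence value.
import random
from statistics import mean, median

random.seed(11)
low, mode, high = 20, 30, 55                      # duration in days (invented)
samples = sorted(random.triangular(low, high, mode) for _ in range(50_000))

p80 = samples[int(0.8 * len(samples))]            # 80th percentile
print(f"mode={mode}, median={median(samples):.1f}, "
      f"mean={mean(samples):.1f}, 80% confidence <= {p80:.1f} days")
```

Note that with a right-skewed distribution the mode, median, and mean are three different numbers, which is exactly why quoting a single value as "the estimate" invites the Flaw of Averages.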
Now when those wanting to redefine what an estimate is, to support their quest to have No Estimates, redefine forecasting as Not Estimating, it becomes clear they are not using any terms found in engineering, science, mathematics, or finance. When they suggest there are many definitions of an estimate and don’t provide any definition, with the appropriate references, it’s the same approach as saying we’re exploring for better ways to …. It’s a simple-minded approach to a well-established discipline of making decisions, fundamentally disingenuous, and should not be tolerated.
The purpose of a cost estimate is determined by its intended use, and its intended use determines its scope and detail.
Cost estimates have two general purposes:
- To help managers evaluate affordability and performance against plans, as well as the selection of alternative systems and solutions,
- To support the budget process by providing estimates of the funding required to efficiently execute a program.
- The notion of defining the budget leaves open the other two random variables of all project work – productivity and performance of the produced product or service.
- So suggesting that estimating is not needed when the budget is provided ignores these two variables.
Specific applications for estimates include providing data for trade studies, independent reviews, and baseline changes. Regardless of why the cost estimate is being developed, it is important that the project’s purpose link to the missions, goals, and strategic objectives and connect the statistical and probabilistic aspects of the project to the assessment of progress to plan and the production of value in exchange for the cost to produce that value.
The Need to Estimate
The picture below, with apologies to Scott Adams, is typical of the No Estimates advocates who contend estimates are evil and need to be stopped: estimates can’t be done; not estimating results in a ten-fold increase in project productivity, or some equally vague unit of measure.
 Dictionary of Scientific Biography, ed. Charles Coulston Gillespie, Scribner, 1973, Volume 2, pp. 604-5
 Forecasting Methods and Applications, Third Edition, Spyros Makridakis, Steven C. Wheelwright, and Rob J. Hyndman
Some More Background
- Introduction to Probability Models, 4th Edition, Sheldon M. Ross
- Random Data: Analysis and Measurement Procedures, Julius S. Bendat and Allan G. Piersol
- Advanced Theory of Statistics, Volume 1: Distribution Theory, Sir Maurice Kendall and Alan Stuart
- Estimating Software Intensive Systems: Projects, Products, and Processes, Richard D. Stutzke
- Probability Methods for Cost Uncertainty Analysis: A Systems Engineering Perspective, Paul R. Garvey
- Software Metrics: A Rigorous and Practical Approach, Third Edition, Norman Fenton and James Bieman
Information Technology (IT) projects are usually run by project managers who use processes such as scope, schedule, budget and quality management to get the job done. These skills are indeed prerequisite to success in completing a project, but other things are needed to deliver complex information technology projects in today’s fast-moving pressure cooker environment.
What do IT project managers need to do in order to be successful? This article examines key challenges to successful delivery of information technology projects, and gives tips, strategies and risk mitigations that can be used to structure and manage a project successfully.
To be continued next week.
Brought to you by PMLink.com. Author: Miguel Ylareina.
On the way home last week from a program managers conference, I was listening to Bob Dylan's Idiot Wind:
Everything's a little upside down, matter of fact the wheels have stopped. What’s good is bad, what’s bad is good. Idiot Wind, Bob Dylan, Blood on the Tracks, Copyright 1978
Reminds me of the current discourse on #NoEstimates:
- Estimates are Bad, #NoEstimates are Good - you can get out from under the oppression of deadlines, bad managers, and commitments by making decisions with No Estimates.
- #NoEstimates are good, they can increase your productivity ten fold. That's 10X. That's an order of magnitude increase.
- Forecasts are #NoEstimates and that's good. Estimating is not the same as forecasting. Estimating is bad since we can't possibly determine what's going to happen in the future. But we can forecast what's going to happen in the future and call that #NoEstimates.
- Commitments are bad, commitments result from estimates, and that is bad. Commitments ruin the collaborative aspects of the project and that is bad. No committing to each other for a shared outcome is good.
- Making decisions with No Estimates is good, asking when we'll be done and how much it will cost is Bad.
- Knowing when you're wrong is Good, determining the probabilistic confidence of all estimates, updating those estimates with new data from performance and emerging uncertainty, Bad.
- Making predictions about the future using past performance and calling that #NoEstimates Good. Using past performance, adjusted for future possibilities and calling that Estimating, Bad.
- It's bad to have a backlog of needed work, estimating that backlog is more bad. Revising the backlog with updated information is the worst Bad. Having no ability to know if you can meet the need date for the needed cost is Good.
- Changes for 50% of the requirements in the Backlog is Good, because it's just the way it is. Managing the Backlog like an adult and considering changes on the need date and cost, is Bad.
- Comparing yourself to Kent Beck is Good, and being called crazy is Good. Ignoring that there are 100's of 1,000's of people applying Kent Beck's processes. This is called the Galileo Gambit.
- Focusing on Value is Good. Asking what's the cost to produce that Value and when that Value will be available for use so we can start recovering that cost is Bad.
- Estimating means you're getting married to a Plan, is Bad. Ignoring that when Plans change - and they always do since a Plan is a Strategy for success - the new Estimate to Complete does not need to be updated is Good.
The more those in the #NoEstimates community try to convince others that Estimating is Bad, can't be done, and results in a smell of dysfunction, the more Bob Dylan resonates.
We’re idiots, babe
It’s a wonder we can even feed ourselves
We in the management of other people's money domain must be, since we must have missed the suspension of the Microeconomics of Software Development when making decisions. We must have missed the suspension of Managerial Finance applied when we're asked to be stewards of the money our customers have given us to provide value for the needed cost on the needed date. We must have missed the suspension of the need to know when and how much, so our Time Phased Return on Investment doesn't get a divide by Zero error.
The development of software in the presence of uncertainty is a well developed discipline, a well developed academic topic, and a well developed practice with numerous tools, databases, and models in many different software domains.
Economics is the study of how resources (people, time, facilities) are used to produce and distribute commodities and how services are provided in society. Engineering economics is a branch of microeconomics dealing with engineering related economic decisions. Software Engineering Foundations: A Software Science Perspective, Yingxu Wang, Auerbach Publications.
Software engineering economics is a topic that addresses the elements of software project costs estimation and analysis and project benefit-cost ratio analysis. As well these costs, and the benefits from expending those costs, produce tangible and many times intangible value. The time phased aspects of developing software for money, means we need to understand the scheduling aspects of producing this value.
All three variables in the paradigm of software development for money - time, cost, and value - are random variables. This randomness comes from the underlying uncertainties in the processes found in the development of the software. These uncertainties are always there, they never go away, they are immutable.
Economic Foundations of Software Engineering
There are fundamental principles and methodologies utilized in engineering economics, and their applications in software engineering form the basis of decision making in the presence of uncertainty. These formal economic models include the cost of production and market models based on fundamental principles of microeconomics. The dynamic values of money and assets, and patterns of cash flows, can be modeled in support of management's need to make decisions in the presence of the constant uncertainties associated with software development.
Economic analysis methodologies for engineering decisions include project costs, benefit-cost ratio, payback period, and rate of return, each of which can be rigorously described. This is the basis of any formal treatment of economic theories and principles. Software engineering economics is based on elements of software costs, software engineering project cost estimation, economic analyses of software engineering projects, and the software maintenance cost model.
Economics is classified into microeconomics and macroeconomics. Microeconomics is the study of behaviors of individual agents and markets. Macroeconomics is the study of the broad aspects of the economy, for example employment, export, and prices on a national or global scope.
A universal quantitative measure of commodities and services in economics is money.
Engineering economics is a branch of microeconomics. There are some basic axioms of microeconomics and engineering economics.
- Demand versus supply. Demand is the required quantities for a product or service. It is also the demand for labor and materials needed to produce those products and services. Demand is a fundamental driving force of market systems and the predominant reason for most economic phenomena. The market response to a demand is called supply.
Supply is the required quantities for a product or service that producers are willing and able to sell at a given range of prices. This also extends to the labor and materials needed to produce the product and services to meet the demand.
Demands and supplies are the fundamental behaviors of dynamic market systems, which form the context of economics. Not enough Java programmers in the area? The cost for Java programmers goes up. Demand for rapid production of products? The cost of skilled labor, special tools, and processes goes up. COBOL programmers in 1998 to 2001 could ask nearly any price for their services. FORTRAN 77 programmers here in Denver could get exorbitant rates to help maintain the Ballistic Missile Defense System when a local defense contractor was awarded the maintenance and support contract for Cobra Dane.
Making Decisions in the Presence of Uncertainty
Making decisions is about Opportunity Costs.
Opportunity Costs are those costs resulting from the loss of potential gain from the alternatives other than the one chosen by the decision maker.
Every time we make a decision involving multiple choices, we are making an opportunity cost based decision. Since most of the time these costs are in the future and are uncertain, we need to estimate those opportunity costs, as well as the probability that our choice is the right one to produce the desired beneficial outcomes.
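A sketch of that arithmetic, with invented figures: each alternative's payoff is probabilistic, so the expected value of each choice must be estimated before the opportunity cost of the foregone choices can be known.

```python
# Opportunity-cost sketch: estimate the expected value (EV) of each
# alternative, pick the best, and the opportunity cost is the best EV
# we walked away from. All probabilities and dollar figures are invented.
alternatives = {                      # (probability of success, payoff, cost)
    "build": (0.60, 500_000, 150_000),
    "buy":   (0.85, 300_000, 120_000),
    "defer": (1.00, 100_000,  10_000),
}

ev = {name: p * payoff - cost for name, (p, payoff, cost) in alternatives.items()}
chosen = max(ev, key=ev.get)
opportunity_cost = max(v for name, v in ev.items() if name != chosen)

print(f"choose '{chosen}' (EV ${ev[chosen]:,.0f}); "
      f"opportunity cost ${opportunity_cost:,.0f}")
```

Notice that every number in the table is an estimate; remove the estimates and there is no way to compare the alternatives at all.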
Here's an example from a tool we use, Crystal Ball (now from Oracle). There are similar plug-ins for Excel (RiskAmp is affordable for the individual).
Another useful tool in the IT decision making world is Real Options. Here's a simple introduction to RO's and decision making.
- The Stakes — The stakes involved in the decision, such as costs, schedule, and delivered capabilities, and their impacts on business success or on meeting the objectives.
- Complexity — The ramifications of the alternatives are difficult to understand without detailed analysis.
- Uncertainty — Uncertainty in key inputs creates uncertainty in the outcome of the decision alternatives and points to risks that may need to be managed.
- Multiple Attributes — Larger numbers of attributes cause a larger need for formal analysis.
- Diversity of Stakeholders — Attention is warranted to clarify objectives and formulate performance measures when the set of stakeholders reflects a diversity of values, preferences, and perspectives.
Reducible and Irreducible Uncertainty
All project work is probabilistic, driven by underlying statistical processes that create uncertainty. There are two types of uncertainty on all projects: Reducible (Epistemic) and Irreducible (Aleatory).
Aleatory uncertainty arises from the random variability related to natural processes on the project - the statistical processes. Work durations, productivity, variance in quality. Epistemic uncertainty arises from the incomplete or imprecise nature of available information - the probabilistic assessment of when an event may occur.
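A sketch of how the two kinds of uncertainty enter a duration model differently: aleatory variability is sampled on every trial and can only be protected with margin, while the epistemic term is the probability of an event that could, in principle, be bought down by acquiring better information or retiring the risk. All numbers are invented.

```python
# Separate the two uncertainties in a toy duration model.
import random

random.seed(3)
trials = 20_000
totals = []
for _ in range(trials):
    # Aleatory: natural variation in the work itself (irreducible).
    # random.triangular takes (low, high, mode).
    duration = random.triangular(18, 40, 22)
    # Epistemic: a rework event we believe has a 30% chance of occurring
    # (reducible - we could test earlier and retire this risk).
    if random.random() < 0.30:
        duration += 10
    totals.append(duration)

totals.sort()
print(f"median={totals[trials // 2]:.1f} days, "
      f"85% confidence <= {totals[int(0.85 * trials)]:.1f} days")
```

Retiring the epistemic risk pulls the whole upper tail in; adding margin for the aleatory variability is the only protection against the rest.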
There is pervasive confusion between these two types of uncertainties when discussing the impacts on these uncertainties on project outcomes, including the estimates of cost, schedule, and technical performance.
All The World's a NonLinear, Non-Stationary Stochastic Process, Described by 2nd Order non-Linear Differential Equations.
In the presence of these conditions - and software development is in their presence - we need to understand several things for success. What are the coupled dynamics? What are the probabilistic and statistical processes that drive these dynamics? And how can we make decisions in their presence?
Predictive Analytics of Project Behaviors
In the presence of uncertainty, the need to predict future outcomes is critically important. One of the professional societies I belong to has a presentation on this topic. Here's a small sample of a mature process for estimating future outcomes given past performance. If you back up the URL to http://www.iceaaonline.com/ready/wp-content/uploads/2015/06/ you'll see all the briefings on the topic of cost, schedule, and performance management used in the domains I work in.
 Risk Informed Decision Making Handbook, NASA/SP-2010-576 Version 1.0 April 2010.
 "Risk-informed decision-making in the presence of epistemic uncertainty," Didier Dubois, Dominique Guyonnet, International Journal of General Systems, Taylor & Francis, 2011, 40 (2), pp.145-167.
Vendor: Paymo LLC
Hosting options: Web only
Cost and plans: $4.95/user/month. On top of that, there’s an invoice add-on available for paying customers that allows you to generate unlimited invoices, estimates and expenses for $9.95/month. 15 day free trial.
Languages: 18 options, one of the most generous I’ve ever seen!
Currency: Loads. I didn’t count them all, but there are probably about 18 to match the language options.
Basic features: starting a project
None of the software I’ve reviewed this year makes it difficult to start a new project, and Paymo is no exception.
Use the big ‘Add Project’ button. Type in the project description, add the client info (I think this is mandatory), set a colour. There is a nice range of greens so I can choose one to match my corporate colours.
You can then add tasks. I wasn’t massively impressed that each task has to be added to a task list. I wasn’t sure why I would want to set up new task lists, so I created one to find out. It just displays the tasks grouped together on the screen. You could use a task list to group or categorise tasks such as by the person who is doing the work or by project stage.
I did like that you don’t have to click too much to move to the next task. If you have lots of tasks to enter you can do them quickly, type the task name and click Add Task to get another one going straight away. I am a big fan of not having too many mandatory fields. You can mark a task as complete from within the task details or the task list.
You don’t have to click save or enter on lots of the fields in the system, which is very slick. Once it’s on the screen, it’s done.
You can link milestones to a task list but not to a task. And I don’t believe it’s a dynamic link – it’s not going to flash up that you are missing that milestone if you don’t complete all your tasks on time.
You can’t add dependencies between tasks, at least not that I could see, and there’s no Gantt chart.
Paymo is really aimed at small and medium sized businesses which provide services to others. Everything to do with a project relates back to the client.
The reporting feature gives you lots of options, but essentially they only report on time spent on the project (and the associated cost of this). These sorts of reports are handy for giving your clients or working out what to bill clients, but perhaps less useful for working out whether your internal team is over-resourced.
The good thing about the reports is that you can have them either static or live, so you can produce real-time reports or snapshots depending on what you need.
The dashboard is where you’d go for your other data, and it looks very smart. You can’t drill down into the dashboard, which is a shame. The dashboard picture in the help and on the Paymo website looks much better than the dashboard I had available, and not just because it had a ton more data in it. There obviously are additional widgets available but I wasn’t sure how to add them to the dashboard.
This is what the press kit dashboard image looks like; what you could have if you had more data and more widgets:
There is a full timesheet feature, with plenty of colourful ways to display the data. You can pick to view the data as lists, or month to view/week to view with different colours to highlight your different projects (hence the reason for choosing a project colour).
Integrations and mobile
Paymo has an API, plus apps for Apple devices and one you can download from Google Play. Then there are desktop widgets for Windows and Mac that let you easily track time on tasks and update your projects without having to have to go to the website all the time. There’s also Zapier integrations, so it’s really flexible.
Tailoring it for you
One of the best things about Paymo is how much you can tailor it to suit you. Add photos of your team. Add a logo. Pick your colours. And best of all, you can change the date format so you don’t have to have the month first! UK readers rejoice.
It might seem picky, but these little things make the difference between a totally smooth experience and a happy user and one who uses it because they have to and thinks it’s just OK.
If you run a consultancy, if you offer services to other businesses, if you are a freelancer, then this would work for you. It’s slick, well-designed and targeted to a particular market.
If you need a Gantt chart and more advanced project management features like task dependencies then this isn’t a product that you’d benefit from. Equally if you are just looking for a task management system then you would do better finding something else. The power of Paymo is in the integrated invoicing, financial reporting, billing and client management. If you didn’t need that then you’d be wasting much of the functionality, in my view, as that’s what it does best.
Right to Reply
I shared this review with Laurentiu at Paymo and he commented:
“There are actually 74 currency options.
Grouping by task lists is mainly for project stages/types of jobs and not for users (users can be added to each task). For example, a project “Web Design” that has been split in tasks lists and tasks and users assigned to tasks:
- online research (Joe)
- meeting with clients (Mary)
- compile report (Joe)
- front end (Chris)
- back end (John)
You can also hit Enter to add another task (after you’ve added one) instead of moving your hand from the keyboard to the mouse. You can set reminders.
We’re working right now to add a Gantt chart. It should be available at the beginning of 2016. We’re also working to add Kanban boards and more planning features.”
Full disclosure: Paymo has been one of my blog sponsors this year but I have not been paid for this review. Just thought you’d like to know!
In the for-profit world, revenue from the sale of a product or service, minus the cost to produce that revenue, is profit (in a general form). In the non-profit world, earnings are needed to fulfill the mission of the firm, so profit per se is not the goal of those providing the product or service in exchange for the cost to do that. I work in both for-profit and non-profit domains. In both domains, the cost to produce the value needs to be covered by income from some source.
In both domains, the writing of software used by the customer is our primary cost. Those customers pay us for the software; we pay the employees who produce the software. Those customers have an expectation that the software will meet their needs in terms of capabilities, performance, effectiveness, and many other ...ilities in support of their business or mission.
These expectations come in several forms:
- Arrival Date - When can we expect to receive the latest fixes to the code, for the defects we identified and turned in to you 3 weeks ago? In what release?
- A Capability to Do Something - Those features you spoke about at the User Conference, when do you expect we'll be able to get a look at them in Beta form, so we can see how they will impact our business process?
- The Ability to Manage Change - We just received notification that our customer (the customer of the customer) will be switching to a new security payment token system. When can you validate that your system will be compliant with that notification?
- A forecasted cost and schedule - We've just been awarded a contract to provide features that we think you can provide in your product. Do you have a product roadmap showing when the needed features in the attached RFP and contract award document will be ready for use in the system we are proposing to our new customer?
These types of questions are the norm for all businesses that convert money into products or services. Whether we're bending metal into money or typing on keyboards to produce money, the core principles of converting that money into more money are the same.
These business processes require making decisions in the presence of uncertainty.
There is a discipline for this process: Operations Research. This is how UPS defines the routes of its trucks every day, how the local dairy plans the production run for milk and gets it delivered as planned, how airlines plan and execute today's routing with the right crews, fuel, and working hardware, how roads are built, how high rises go up, how Target gets the goods to the store, and, wait for it, how software and hardware are built and delivered on a planned schedule for a planned cost to meet the planned needed capabilities of those paying for those products, when all the processes to do this have probabilistic behaviors.
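As a toy illustration of this kind of Operations Research decision, here is a brute-force production-mix sketch in Python. The products, profits, and capacity numbers are hypothetical, and a real planner would use linear programming rather than enumeration, but the shape of the decision is the same:

```python
from itertools import product

# Hypothetical dairy production-mix problem: choose how many units of milk
# and cream to bottle today, maximizing profit under a machine-hours limit.
PROFIT = {"milk": 2.0, "cream": 5.0}   # profit per unit (assumed numbers)
HOURS = {"milk": 1.0, "cream": 3.0}    # machine-hours per unit (assumed)
CAPACITY = 12                          # machine-hours available today

def best_mix(max_units=12):
    """Brute-force search over small integer production mixes."""
    best = (0.0, (0, 0))
    for milk, cream in product(range(max_units + 1), repeat=2):
        hours = milk * HOURS["milk"] + cream * HOURS["cream"]
        if hours <= CAPACITY:
            profit = milk * PROFIT["milk"] + cream * PROFIT["cream"]
            best = max(best, (profit, (milk, cream)))
    return best

profit, (milk, cream) = best_mix()
print(profit, milk, cream)  # milk earns more per machine-hour, so it wins
```

Even in this toy, the inputs (profit per unit, hours per unit, capacity) are estimates of future behavior; the optimization is only as good as those estimates.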
Those conjecturing that these decisions can be made without estimates need to provide a testable example that does not violate the principles of the microeconomics of decision making and the managerial finance governance processes of their business.
How would the opportunity cost decisions, the Net Present Value decisions (a calculation that compares the amount invested today to the present value of the future cash receipts from the investment), or the Economic Value Added decisions (an estimate of a firm's economic profit, being the value created in excess of the required return of the company's investors) be made without an estimate of the future outcome of that decision?
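As a sketch of why each of these calculations depends on estimates, here is a minimal Net Present Value function in Python. The cash flows and discount rate are hypothetical; in practice every future receipt is itself an estimate with a probability distribution, not a point value:

```python
def npv(rate, cashflows):
    """Net Present Value: discount each future cash receipt back to today.
    cashflows[0] is the (negative) investment made today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical: invest 100 today, expect receipts of 40, 50, 60 over three
# years, with a 10% required rate of return.
value = npv(0.10, [-100, 40, 50, 60])
print(round(value, 2))  # positive NPV suggests the investment clears the hurdle rate
```

Every term after the initial outlay is a forecast. Change the estimated receipts or the estimated discount rate and the decision can flip sign, which is exactly why these decisions cannot be made without estimates.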
Without these, those making the conjecture (and even selling seminars on how to make decisions without estimates) have no way to have it tested in an actual business environment. Tested by those actually paying for the work, not conjectured by those spending the money of those paying for the work.
You think that because you understand "one" that you must therefore understand "two," because one and one make two. But you forget that you must also understand "and" - Sufi teaching story
The elements of a system, the software system being built, are the easiest parts to recognize. They're visible and tangible because they're in the backlog and scheduled for development. If we look closer - and accept the fact that these elements have interactions with each other - there is more to the solution than a pile of stories being implemented as slices of larger elements of needed capabilities.
The intangibles of the system, the interactions between these elements (slices), the real-time behaviors that produce or consume data, and the emergent behaviors resulting from the evolution of the system state during the system's execution, are also critical to success.
If we only consider the sliced elements of the system, there is no end to the process. How small is too small? What is the appropriate size of the slice? Not from an effort point of view, but from a systems point of view? But more importantly, what are the interactions between the sliced elements? This depends on the slices and their interfaces. It depends on the interconnections, the relationships that hold the sliced elements together.
Without considering these interconnections and the dependencies at the slice points - this is a Cut Set Optimization Problem - simply saying slicing provides benefits to estimating and execution has no actual basis in practice.
Here's how to tell the difference between an actual systems view and just a pile of sliced work:
- Can we identify the actual parts of the system? To the atomic level. Not atoms of course, but actual standalone parts, whose further decomposition (slicing) adds no value and in fact may create more complexity.
- Do we know how these parts - the sliced work outcomes - affect each other? Both statically and dynamically?
- Do we know how these parts - the sliced work outcomes - produce effects together that are different from the effects each produces as a standalone part?
- Do we know how this integrated effect behaves over time to meet the actual needed capabilities the system is supposed to provide to those paying the bills?
Many of the interconnections in the system operate through the flow of information. This information holds the system together and enables the system to operate as needed.
Slicing is only useful if it answers the questions above and, most importantly, if those sliced parts fit into the overall structure of the system - the system architecture, both static and dynamic - to statically and dynamically provide the customer with the needed capabilities at the needed time, for the needed cost, and deliver the needed performance and effectiveness from those capabilities.
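One way to make the interconnection question concrete is to model the sliced elements as a dependency graph and count the interconnections a candidate slicing would cut. This is only a sketch with a hypothetical graph, not a full cut set optimizer:

```python
# Hypothetical dependency graph: each element names the elements it
# exchanges information with.
DEPENDENCIES = {
    "ui": {"api"},
    "api": {"auth", "db"},
    "auth": {"db"},
    "db": set(),
}

def cut_edges(slices):
    """Count interconnections that cross slice boundaries.
    `slices` maps element -> slice label."""
    return sum(
        1
        for element, deps in DEPENDENCIES.items()
        for dep in deps
        if slices[element] != slices[dep]
    )

# Two candidate slicings of the same work:
by_layer = {"ui": "A", "api": "A", "auth": "B", "db": "B"}
by_feature = {"ui": "A", "api": "B", "auth": "B", "db": "B"}
print(cut_edges(by_layer), cut_edges(by_feature))
```

Two slicings of the same backlog can leave very different numbers of severed interconnections, and every cut edge is an interface, a coordination cost, and an estimating risk that "just slice it small" never accounts for.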
The least obvious part of any system - its function or purpose - is often the most crucial determinant of the system's behavior and its resulting success - Thinking in Systems: A Primer, Donella H. Meadows.
Take care so as to not fall under the siren song of simple approaches.
I have yet to see any problem, however complicated, which, when looked at in the right way, did not become more complicated - Poul Anderson, quoted in Arthur Koestler, The Ghost in the Machine.
Take care when slicing to make sure you understand the system, the interaction of its elements, and the outcomes of those interactions, so that you don't break the topological structure needed to assure the proper flow of value to those paying for your work.
Today in our annual Software September feature I’m talking to Christian Kotzbauer, Managing Director of Genius Project. He has been with Genius Project for over 14 years and is based out of the company’s office in Germany.
I talked to Christian about his role and the future of project management software. So without further ado:
Hello Christian. What does a Managing Director of a software company do all day?
On a daily basis I am involved in managing and following up on sales objectives, which requires prioritising customer, employee and organisational requirements. I am sort of like the bridge between our Marketing and our Sales department; my goal is to make sure their activities are aligned to deliver maximum results.
I’m also heavily implicated in developing our organisation’s strategy and goals and communicating it to our staff, which means I am in touch with our multinational offices every day. To have a strong handle on our strategy, I must also take part in overseeing the design, marketing and quality of our product. I always have to be thinking ahead as to what improvements we can make both at an organisational level and on a product delivery level.
Wow, that sounds busy, and more wide-ranging than I expected. Genius is often nominated for awards and has a successful track record. What has been your proudest moment at the company?
We’ve had a lot of monumental moments at Genius, but I would have to say that our recent Top 100 Award in Germany was my proudest moment; to be more specific, we won the Top-Innovator Excellence Award.
I had the honour of accepting the accolade on behalf of the organisation from mentor Ranga Yogeshwar. The room was filled with leaders from the largest industrial companies, which as you may know, are the key drivers for the German economy. Needless to say, we were all proud that Genius Project could be recognised amongst such an influential group of leaders.
Congratulations! What changes are you noticing in your user base now that we have a lot of people in the workplace who have never known a world without Facebook?
The new generation of employees entering the workforce demand cool, simple and easy to use tools that are accessible from anywhere. As an innovative company, we are quick to note changes in user behaviours and to respond to them.
Three years ago we introduced our own social collaboration software on the market, which we have continuously expanded and improved on over the years. Although we are accelerating this trend and support it with new features in each version, we are keen to ensure that all of our users have the option to adopt this trend at their own pace.
Talking of trends, where do you think the project management software industry is going to be in 5 years?
People will continue to demand access to their tools from anywhere around the world at anytime. Privacy and security issues need to be addressed before it can fully take off, especially for organisations and countries that have a compelling need to protect their data.
I think the challenge that most vendors have is that their tools and features are powerful but not always simple, especially for the new workforce. In my opinion, the software industry is definitely moving towards simplicity and accessibility, but with an increasing demand for powerful features.
Organisations are responding to these demands in new and innovative ways, such as cloud, mobile and social computing. We can see even in Gartner’s Magic Quadrant that more and more companies, including Genius Project, are investing in cloud-hosted deployment of their software, because it is an inevitable shift which we must all adapt to.
Finally, we will continue to see more software being written by a group of contributors, not only a company. This is the direction we are moving in: more collaborative, more social and more accessible.
I’d definitely agree with that. What new features have you added to the software recently and why do you think they will make a difference to project managers in this new world?
We’ve made several additions to the software but some of the more interesting ones include a new dashboard pane that was added in GeniusLive! (our social collaboration platform); the new pane displays a user’s tasks and To Dos so that they can access most of their project work without referring to the standard menu navigation.
Genius Project has also integrated with Outlook, enabling automatic synchronisation between Genius Project and Outlook, as well as viewing Genius Project “feed” in Outlook with Tasks and To Dos.
Another great feature that we added is the ability to send and receive email within Genius Project. This allows users to extend their communication with external people who do not have access to the Genius Project system. It is possible to share any “Generic” records or documents via email by simply using the standard email “Forward” action within Genius Project. Users will also receive notifications in GeniusLive! when they receive email responses, they can manage email threads which will be stored in the system and leverage this function to push emails from programs such as Outlook and Gmail to Genius Project.
We’ve also enhanced our Gantt Chart (Genius Planner). The enhanced capabilities include rich text description field of the tasks within the Gantt Chart and the option to define task properties in Genius Planner.
OK, so what is next for Genius?
We will be launching a new release this coming fall, Genius Project version 8.0, where we have completely revamped the user interface to simplify navigation, increase performance and improve usability.
Due to high demand from clients and prospects we further improved the Agile and What-If Analysis features. We wanted to ensure that our product is lightning fast in this product release, as a reflection of our mission to improve project efficiency.
Great, thanks for sharing those thoughts!
Full disclosure: Genius Project has been one of my blog sponsors this year but in case you were wondering I was not paid for this interview.
It is popular to use several references to the estimating problem that are three to four decades old:
- A Software Metrics Survey
- Analysis of Empirical Software Estimation Models
- SOFTWARE ENGINEERING Report on a conference sponsored by the NATO SCIENCE COMMITTEE, Garmisch, Germany, 7th to 11th October 1968
Much has happened in the last 3 to 4 decades to increase the accuracy and precision of estimates for software development efforts.
- Databases of past performance
So when we hear there is a problem with estimating and the basis of that claim is 30 to 40 year old reports, we need to be skeptical at best. When those claims are used to sell a book, a workshop, an entire idea, then some serious questions need to be asked.
Is there any understanding at all of the current software estimating techniques as applied with tools and databases to modern systems, not 40 year old FORTRAN systems?
While there are huge issues with estimating any complex emergent system, identification of the root cause of the problem has not been done by those conjecturing that Not Estimating is the solution. This Root Cause Analysis has been done for modern complex systems, and the cause has been found to be one of three sources.
The principles of cost and schedule estimating, and assessment of the related technical and programmatic gaps, are the same in all domains at every scale, from small projects to billion dollar programs. Why? Because it's the same problem no matter the scale.
- We didn't know
- We didn't do our homework
- We ignored what others have told us
- We ignored the past performance in the same domain
- We ignored the past performance in other domains
- We just weren't listening to what people were telling us
- Our models of cost and schedule growth were bogus, unsound, did not consider the risks, or we just made them up
- We couldn't know
- We didn't have enough time to do the real work needed to produce a credible estimate
- We didn't have sufficient skills and experience to produce a credible estimate
- We didn't understand enough about the problem to have our estimate represent reality
- We chose not to ask the right questions
- We chose not to listen
- We chose not to do our homework, or worse, chose not to do our job
- Since we're spending other people's money, we've decided it's not our job to know something about how much it will cost and when we'll be done to some level of confidence. We'll let someone else do that for us and we'll use their estimates in our work.
- We didn't want to know
- "You can't handle the truth," as Jack Nicholson's character Col. Nathan Jessep so clearly states in A Few Good Men.
- As the political risk and consequences of the project increase this process becomes more common.
But here's the way out of the trap for at least the first two sources:
- We didn't know
- Do your homework. Look for reference classes for the work you're doing.
- Come up with an estimate based on credible processes. Wide Band Delphi, 20 questions; there are lots of ways out there to narrow the gap between the upper and lower bounds of the estimate.
- We couldn't know
- Bound the risks with short cycle deliverables.
- This is called agile
- It's also called good engineering as practiced in many domains, from DOD 5000.02 to small team agile development
- We don't want to know
- Well, there's no way out of that short of being King.
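One of the simple techniques for narrowing the gap between the upper and lower bounds is a three-point (PERT-style) estimate. This is a minimal sketch; the low, most likely, and high values are hypothetical inputs you would draw from reference classes or a Wide Band Delphi session:

```python
def pert_estimate(low, likely, high):
    """Three-point (beta/PERT) estimate: a weighted mean and an
    approximate standard deviation for a duration or cost."""
    mean = (low + 4 * likely + high) / 6
    std_dev = (high - low) / 6
    return mean, std_dev

# Hypothetical feature: 10 days best case, 15 most likely, 32 worst case.
mean, sd = pert_estimate(10, 15, 32)
print(mean, round(sd, 2))
```

Note what this produces: not a single number but a mean plus a spread, which is exactly the "estimates are probability distributions, not point values" argument. The skew in the inputs (32 is much further from 15 than 10 is) pulls the mean above the most likely value.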
So take care when you hear about problems in the past, from long ago, possibly from before those conjecturing the problem and its solution were born.
Something that is rarely talked about with project management but is all too common is the need for project managers to look after multiple concurrent projects. When an organization has invested in the skills of a project manager and they prove their worth, this often results in the paradox of that individual being given more than one project to handle and thereby making it more difficult for them to achieve future success. However, all is not lost. It is possible to manage multiple projects at the same time without dropping the ball. Here is a strategy to get you through it.
Divide your day
One of the best ways in which you can comfortably juggle multiple projects is to review your time management approach and how your work gets completed on a day by day and week by week basis.
Brought to you by PMLink.com. Author: Lauren Lambie.
In agile there is a mnemonic, INVEST. This term is one of those Holy Grails that is never subject to assessment within the agile community. I had a hands-on experience with an agile tools vendor when we were selecting tools for a DOD program. When speaking with the gurus at the tool vendor, we mentioned multiple resources assigned to a single task and the interdependence of the tasks and their deliverables.
You'd have thought the devil himself had walked into the room. In systems there are always interdependencies, and the work requires multiple skills working together on those interdependencies.
A reminder of INVEST
- I=Independent - The user story should be self-contained, in a way that there is no inherent dependency on another user story.
- N=Negotiable - User stories, up until they are part of an iteration, can always be changed and rewritten.
- V=Valuable - A user story must deliver value to the end user.
- E=Estimable - You must always be able to estimate the size of a user story.
- S=Small - User stories should not be so big as to become impossible to plan/task/prioritize with a certain level of certainty.
- T=Testable - The user story or its related description must provide the necessary information to make test development possible.
The irony here is that those suggesting pure agile does not require estimating seem to have missed the E in INVEST.
But here's the issue...
In our domain, we work on systems. Others may work on a bunch of stuff. Here's how to tell the difference:
- Can you identify the parts?
- Do these parts affect each other?
- Do the parts together produce an effect that is different from the effect of each part on its own?
- Does the effect, the behavior over time, persist in a variety of circumstances?
If the I in INVEST is in fact true, then you're likely working on a bunch of stuff, not a system. A bunch of stuff is likely de minimis in ways systems are not.
You think that because you understand "one" that you must therefore understand "two," because one and one make two. But you forget that you must also understand "and" - Sufi teaching story
The notion of decomposing the work - slicing - into small chunks needs to be tested against the system's requirement to also develop and manage the interconnections between these sliced chunks of work. Interconnections in a tree are the physical flows and chemical reactions that govern the tree's metabolic processes. Similar interconnections occur in software systems.
Slicing work below the level of these interconnections of the system elements loses sight of the interdependencies, and therefore loses sight of the system.
Literally you can't see the forest for the trees.
It is the management of these interdependencies that is the Critical Success Factor for increasing the probability of success for the project. Be very careful of falling for the holy grail of slicing without also maintaining visibility into the system, its operations, and the interdependencies between all the elements and all the work needed to produce those elements.
Vendor: Whiteboard, LLC
Hosting options: iPhone/iPad app or website only. No information available on how much it will cost, although they will start charging for it eventually.
Managing your task list
Whiteboard is a task management app. It only does that, so for large projects it wouldn’t be suitable.
From the app it is easy to add a new task. Simply tap on the big + button, give your task a name, set the due date and when you want to be reminded and then hit Save. You can add comments to tasks. I like the fact that on the website you can change the due date of a task to today with one click on the calendar icon.
Your default home page is set up with two workspaces: Home and Work, the idea being that you can use the app to keep track of everything going on in your life.
Your activity stream is probably where you’d start if you had many items on the go and wanted to see what had happened recently. Otherwise your To Do panel shows your active tasks and what you’ve completed.
Other features: team workspaces
You can set up team workspaces so that you can share tasks with a group. Once you’ve created a team you can set up a team workspace. Within a workspace you can add To Do items and upload documents. There is an activity log for the workspace too.
It’s very attractive and easy to use, especially the app version. Slick and clean, it lets you manage your tasks on the go. You can personalise the colours of workspaces, and add photos for team members.
However, the app version is slow to load and it’s far too easy to delete data by accident. Going back cancels what you’ve just done and this was frustrating.
Team collaboration features
Email notifications can be set up or switched off if you like, which is always a good feature. There are also push notifications to the app if that’s your preferred way of getting updates.
You can attach documents to tasks which always helps with collaboration. I didn’t try the app version, but it’s good to know that I could get it on my tablet if I wanted.
I personally don’t like the whole premise of one app to organise your work and home life. It blurs the line between home and work. I don’t want the reminder to return the library books popping up during work-related conference calls.
I don’t think this will tear me away from To Do lists in a notebook, but I can see it working very well for small teams managing multiple small projects, who perhaps work independently and virtually for a lot of the time.
As task management tools go, it’s very good and it’s currently free. If you think you might be in the market for a task management app then it could well be worth signing up now, in case they offer discounts to people already on the list when they start charging.
I am rarely the person directly in charge of the business itself (CEO, CIO, CTO). A department, yes (PMO, DIR); the whole business, no. I work for CEOs, CIOs, Program Managers, and Policy Directors. What I have learned from all these leaders is both simple and complicated.
They have a hard-headed view of how business works. Revenue comes in. The cost to produce that revenue is known ahead of time. Surprises in this cost are not welcome. Everyone talks to each other in probabilistic numbers: accounting speaks in single-point values, but business people and finance people speak in probability and statistics.
All the world's a random process: evolving, impacted by externalities outside the control of the process, with non-linear interactions among the components of the system.
Decision making in the presence of these conditions requires several attributes for success:
- What are the underlying behaviors of the system in terms of statistical processes? Are the processes stationary or are they time dependent or dependent on some other processes?
- What are the parameters of the system that are first order impacts on the decisions?
- In the presence of naturally occurring and event based uncertainties, business decision makers depend on estimates of the outcomes of their decisions. To make a decision in the presence of uncertainty means assessing the probabilistic outcomes of that decision.
- By definition, if you are making decisions in the presence of uncertainty, you are estimating the outcomes. If you're not estimating, or have redefined what you're doing as #NoEstimates when in fact it is estimating, then the only thing left is guessing. And even guessing is a 50/50 estimating technique at worst.
Managers are not confronted with problems that are independent of each other, but with dynamic situations that consist of complex systems of changing problems that interact with each other. I call such situations messes .... Managers do not solve problems, they manage messes - Russell Ackoff, "The Future of Operational Research is Past," Journal of the Operational Research Society, 30, No. 2 (February 1979), pp. 93-104
It is popular in some parts of the agile community to use Waterfall as the bogeyman for all things wrong with the management of software projects. As one who works in the Software Intensive Systems domain on System of Systems programs, Waterfall is an approach that was removed from our guidance a decade and a half ago. But first, some definitions from the people who actually invented these approaches, not those critical of the process and unlikely to have accountability for showing up on time, on budget, on spec.
- Waterfall Approach: Development activities are performed in order, with possibly minor overlap, but with little or no iteration between activities. User needs are determined, requirements are defined, and the full system is designed, built, and tested for ultimate delivery at one point in time. A document-driven approach best suited for highly precedented systems with stable requirements.
- Incremental Approach: Determines user needs and defines the overall architecture, but then delivers the system in a series of increments (“software builds”). The first build incorporates a part of the total planned capabilities; the next build adds more capabilities, and so on, until the entire system is complete.
- Spiral Approach: A risk-driven controlled prototyping approach that develops prototypes early in the development process to specifically address risk areas followed by assessment of prototyping results and further determination of risk areas to prototype. Areas that are prototyped frequently include user requirements and algorithm performance. Prototyping continues until high risk areas are resolved and mitigated to an acceptable level.
- Evolutionary Approach: An approach that delivers capability in increments, recognizing up front the need for future capability improvements. The objective is to balance needs and available capability with resources and to put capability into the hands of the user quickly.
The first criticism of Waterfall came from Dr. Winston Royce, "Managing the Development of Large Software Systems," Proceedings, IEEE WESCON, August 1970, pages 1-9, originally published by TRW. Notice the design iterations in the paper.
Royce’s view of this model has been widely misinterpreted: he recommended that the model be applied after a significant prototyping phase that was used to first better understand the core technologies to be applied as well as the actual requirements that customers needed!
TRW (where Royce worked) was an early adopter of Iterative and Incremental Development (IID), later formalized in Dr. Barry Boehm's spiral model in the mid-1980s. The first work on IID programs was taking place in the mid-1970s. A large and successful program using IID at IBM Federal Systems Division was the US Navy helicopter-ship system LAMPS, a 4-year, 200 person-year effort involving millions of lines of code. This program was incrementally delivered in 45 time-boxed iterations (1 month per iteration).
The project was successful: "Every one of those deliveries was on time and under budget," in "Principles of Software Engineering," Harlan Mills, IBM Systems Journal, Vol 19, No 4, 1980, where he says:
The basic idea behind iterative enhancement is to develop a software system incrementally, allowing the developer to take advantage of what was being learned during the development of earlier, incremental, deliverable versions of the system. Learning comes from both the development and use of the system, where possible. Key steps in the process were to start with a simple implementation of a subset of the software requirements and iteratively enhance the evolving sequence of versions until the full system is implemented. At each iteration, design modifications are made along with adding new functional capabilities.
Many in the agile community use these words, perhaps without ever having read the 1980 description of how complex software intensive systems were developed at IBM FSD and TRW.
In 1976 Tom Gilb stated:
"Evolution" is a technique for producing the appearance of stability. A complex system will be most successful if it is implemented in small steps and if each step has a clear measure of successful achievement as well as a "retreat" possibility to a previous successful step upon failure. You have the opportunity of receiving some feedback from the real world before throwing in all resources intended for a system, and you can correct possible design errors.
The Incremental Commitment Spiral Model is applied to Software Intensive Systems in the DOD. Agile development entered the domain in 2014 with the connections between Earned Value Management and Agile Development.
Without an understanding of the history of development life cycles, many - most recently the #NoEstimates community - use Waterfall as the stalking horse for all things wrong with software development other than their approaches.
So when you hear the Red Herring Fallacy that Waterfall is the evil empire, ask whether the person making that claim has done his homework, worked on any Software Intensive System of Systems, or has experience being accountable for the delivery of mission critical, can't-fail systems. Probably not. Just personal anecdotes yet again.
What’s new with project management software vendors? Here’s the roundup of news from across the PM software industry.
Smartsheet saves client $132k
Smartsheet announced last month that one of their customers, consultancy Centric Digital, had saved 40 hours a week on client projects by switching to Smartsheet. That’s a huge story, and it saves the company around $132,000 a year. But you have to ask, how were they so inefficient before?
Brian Manning, president and chief digital officer at Centric Digital, explained: “We want to do as little project management administration as possible so we can focus our efforts on delivering solutions to clients, not managing a tool,” he said. The company was previously using spreadsheets because (so says the press release) “traditional project management tools were too arduous and bulky, and wouldn’t fit with his team’s agile style of working.”
Smartsheet also brought home three awards from the Microsoft Worldwide Partner Conference this summer, including being named “Best Office 365 App”.
MeisterTask launches more Kanban-inspired automation
MeisterTask, one of the best tools you’ve never heard of, launched new features allowing users to automate recurring steps in the workflow. It might only save you a few seconds per task, but when you’ve got hundreds of project activities it all adds up. Plus, who wants to schedule the same thing over and over again?
Over the next few months MeisterTask will launch more automated actions and integration with tools like Zapier so you should be able to manage the task workflow through various apps from start to finish.
ProjectManager.com rolls out new integrations and API
It looks like Zapier could be the in thing, as ProjectManager.com has also just announced its partnership with the integration engine. Zapier enables you to set up automated actions, on a ‘when this happens do this’ basis. For example, if you upload a file to Dropbox Zapier can automatically sync it to your project in ProjectManager.com.
They have also launched a new API and an online Developer Center to help you manage all the connections and integrations between your project management tools and other systems. Better workflow all around.
Quire now lets you share with external teams
I’m a big fan of transparency and that includes making information available to your external team members like contractors and suppliers. But you don’t want to share confidential information with them. Task management tool Quire now makes it possible to safely allocate tasks to people outside your company with the ‘external team’ feature.
You see the whole picture, they only see what they need to see and everyone can manage their tasks easily. I predict more and more tools will have to offer secure access to limited functionality for external teams – it’s nice to see some vendors have already realised this is a necessity.
Clarizen records best results in company history
Clarizen, which bills itself as collaborative work management software but you’d know it as a project management tool, has reported record results. It closed Q2 and billed it as the best quarter in company history, growing across all regions from a mix of expansion within existing customers and winning new business.
The company opened a new office in London earlier this year, on Regent’s Street, no less, one of the most sought-after addresses in the capital where The Crown Estate (owned by the Sovereign) holds almost the entire freehold.
“Our growth would not be possible without the partnership of our customers and their belief in collaborative work management,” said Avinoam Nowogrodski, CEO, Clarizen. “They inspire our continued commitment to providing a ‘white glove’ customer experience and delivering first-class customer service to new, existing and potential customers.”
LiquidPlanner launches new version
LiquidPlanner launched a major new version last month, LiquidPlanner @Work. The update is designed to make it possible to do project work and ‘ordinary’ work from the same tool. It gives you a private to do list and the ability to capture ad-hoc work. When those jobs turn into more effort than you expected you can convert them to tasks and add them to the project plan.
It’s the result of 6 months of research watching how customers use the product and learning what challenges teams are facing when using software. It’s great to know that companies spend this much time and effort working out what users want.
It is amazing how many project management software vendors don’t put news on their website, especially when I know many of them release new features on a rolling schedule. Guys, I can help write your press releases! That’s my job. So I expect there’s a lot more happening in the software world that we’ll never hear about.
Full disclosure: some of the companies mentioned here are amongst my clients but no one has paid to feature in this round up. If I missed out something great that is happening with your favourite tool, let me know in the comments!
Sometimes, a project manager must deal with a disgruntled client. They may be unhappy with one thing or many things, but if their feelings are strong enough to make a complaint then that complaint needs to be properly and fairly dealt with.
Unfortunately for project managers, complaints are not a rarity in the world of project management. It can seem very unfair to get criticism on a project when in the majority of cases a lot of hard work has gone into getting it successfully delivered. However, the reality is that you may be working with a client that is not happy with what they see. How do you deal with this situation in a way that is suitable for both parties?
Vendor: Celoxis Technologies Pvt. Ltd
Hosting options: Web ($25 per user per month) or on premise ($450 per user)
Languages: English, Spanish, German, Portuguese, French, Simplified Chinese
Currency: Any, add your own symbol
Basic features: getting your project started
Well, there isn’t much basic about Celoxis. I last reviewed it in 2012.
It’s a fully-featured system meaning it would be capable of managing the largest of enterprise projects. The sophistication means it will take you a while to set up, adding in all the parameters and criteria for your company, team and project.
It’s simple to get your projects set up but you still have to add a deadline as a mandatory field, even though you haven’t actually planned your project at that point so you won’t know the scheduled finish date.
Adding tasks is easy, but when I wanted to make one task the dependency of another it didn’t automatically shift the dates. That’s not necessarily a bad thing – dependencies don’t always mean that the tasks happen in chronological order. But then I set it how I wanted, moved the date of the first task and the dependent task didn’t shift forward.
I didn’t have the settings right, and I know there is a setting to change it to auto-schedule. But I spent 10 minutes looking for it, plus searched in the help (“From grid, select the manually scheduled column” – I couldn’t find the grid view, let alone the right column) and I still couldn’t find it again. Where is the Options tab?
You’d figure this out if you used it regularly. It’s not a complaint, just a comment on the level of training users would require. It’s not intuitive, but you do get a free administrator course when you sign up, so someone in the department would have all the skills necessary to answer those questions. Towards the end of my trial I started to use the ‘Recent’ menu which lets you quickly navigate to the areas of the tool that you’ve used recently.
The risk register is nice. It calculates the risk score and allows you to drill down into the risk detail. It presents a clean summary and I like that it’s an integral part of the system.
What I thought could be slicker was creating risks. From the project’s main menu you add a new risk. Once you’ve entered the details in the pop-up the risk appears on the screen – great. But there is no way from this page to add another risk. You have to navigate back to the main project homepage and then go through the process again. It’s a tiny thing, and I expect it wouldn’t bother me after a while, but the user flow through highly-trafficked areas of a system is something that is always worth looking at.
I love that it has a calendar view. You can add events directly to the calendar and decide who can see them – great for team meetings, upcoming presentations, launch day and so on. You can also switch on displaying tasks and projects so you can see all forthcoming work on the calendar too. This would be very handy for people less confident with Gantt charts or for discussing the work for the month with your team.
And you can export in iCal format so you can sync the Celoxis calendar with Outlook or other apps.
It looks nice. The new user interface is clean and professional. Aside from the learning curve it’s easy to use.
There are still no apps for smartphones or tablets but you can access it via a browser. If you wanted to stay on top of things while you were away from the office that, and email integration, should keep you going.
Discussions can be attached to your current project or another project. You can choose to share them with the client or keep them internal and you can add documents.
You can set up workspaces so that individual teams are able to segregate who can see what. There are granular security permissions which let you share information with other groups including clients, making sure they only have access to the data you want them to see.
It integrates with email and sends out alerts for upcoming tasks. In fact, it integrates with a lot of things including Salesforce, Google Drive and LDAP out of the box, plus it has an API for integration with products like SharePoint.
There is also an activity stream, which shows you everything that has happened on the project and lets you click to go there – very handy.
Other features: dashboards and reports
The first screen you see is your dashboard. It gives you an overview of the tasks your team members are currently working on, plus graphs to show task status and whether your projects are on track (or not). Click the graph to drill down into the detail.
I was impressed with the broad range of reports when I last looked at the tool and I still am. The cross-project Gantt lets you see summary information for every project in the portfolio. You can choose if you want to add a particular report to the dashboard and where it appears (not all the reports allow this, but many do).
Celoxis is a mature product that would suit a PMO looking for stable, enterprise-class project management software. It’s not the cheapest, but that’s because it has a ton of professional features making it suitable for large teams and significant projects and it is still competitively priced. You get what you pay for – a comprehensive system that would integrate with your other enterprise tools and help you manage, track and report on projects, programmes and portfolios.
It’s for ‘proper’ project management – and by that I mean it wouldn’t suit teams who are looking for a basic product to track tasks. If you want to up your game from that and use specialised project management software, then this would fit the bill.
What I liked most is that it’s a tool that lets you keep everything in one place, but without feeling like it’s preventing you from using anything else. I think many all-in-one tools try to host all the features that you could ever need, and they do it badly because they aren’t expert in everything. I haven’t used Celoxis enough to say it’s expert in everything, but it certainly offers you a lot in a logical way that doesn’t feel as if the extras are bolted on. It’s not trying to replace email or any of the other tools you might use day-to-day, but it does give you the choice to manage your projects in a joined up, transparent way.
The team at Celoxis got in touch to highlight some additional features for this review. Here’s what they had to say.
“I thought I’d give you some updates on our upcoming release and road map ahead.
Our next release is planned around mid-to-end September and will include the following:
- New Charts – Look and feel, animations, effects, interactive, toggling legends on bubble and grouped bar charts, improved rendering on mobile, etc…
- New Bolder menu and ability to brand accounts with own logo.
- Improved task and app collaboration – easily collaborate on work items such as tasks, bugs, issues, etc… through emails, even without logging into the system
- Ability to download reports as PDF
Immediate Road map
Apart from this, we have a host of capabilities that are in the works on our immediate road map:
- Split tasks
- Scheduled report delivery
- Multiple dashboards
Looking back (Last year)
3 major releases (7.0, 8.0, 9.0) in 2014
Focus of each release:
v7.0: Focus on usability, productivity and enhanced browser support
- Browser support: latest browser support (IE 10 & 11)
- Usability and UI improvements: Trendy flat UI, see more of your data and less of our application, and a consistent and intuitive user experience
v8.0: Focus was on Custom Apps, Manual Scheduling, and Google drive integration. Other features were Automatic time sheet reminders, Pre-fill weekly time with working items, fill time on bugs, issues, etc…
- The custom apps is aligned with the shift that we are noticing in SMBs and enterprises moving away from best-in-class to a more ‘all-in-one’ and integrated software.
- Pre-built apps for most common workflows: Bugs, Issues, To Dos, Change Request, Risk Management, Project Approvals
- Date specific analysis to help in forecasting, trends and analytics
- Additional database support for On-Premise installation (MSSQL SERVER 2012, 2014, Oracle 11g & 12c)
v9.0: Focus on Resource Capacity Planning including real-time workloads and resource efficiency. Focus on Salesforce integration.
- Improved resource management capabilities have been an innovation, co-created with our customers feedback and inputs. Instead of showing a static load, the real-time view dynamically updates to show remaining effort based on current status.
- v9.1: Integration with QuickBooks Online and other performance/API improvements
Support and Professional Services
- 24×5 email based support
- Chat-based support for trial users.
- Enhanced help center to include more self-serve options such as 2-minute videos and FAQs, getting started guides, etc…
- Onboarding programs and training sessions conducted by PMP certified professionals.
- 1 hr complimentary Q&A with our project experts for new customers.”
Full disclosure: Celoxis have been one of my past blog sponsors this year, although this review was not paid for and represents my own views.
The previous post on Source Lines of Code, set off a firestorm from the proponents of #NoEstimates.
I'd rather not estimate than estimate with SLOC
Or my favorite, since we work in flight avionics (command and data handling (C&DH) and guidance, navigation and control (GN&C)), fire control systems, fault tolerant process control and the diagnostic coverage needed for process safety management, and ground data and business process systems for both aircraft and spacecraft:
I'm no longer going to fly with any company that counts LOC as (it) shows a lack of intelligence.
So the question is: where and when is estimating source lines of code useful for making business decisions?
Let me count the ways.
Embedded Software Intensive Systems
In the embedded systems business, memory is fixed, processor speed is hardwired and often limited by the thermal control process. Aircraft and spacecraft avionics bays have limited cooling, so getting a faster processor has repercussions beyond its purchase cost. In an aircraft, cooling must be added, increasing weight and possibly impacting the center of gravity. In a spacecraft, cooling is not done with fans and moving air. There is no air. Heat pipes and radiators are needed, again adding weight.
For those with experience in rapid development of small chunks of code, released often to the customer for incremental use in a business process, with feedback shaping the next sliced piece of functionality, concerns like center of gravity, thermal load, and the realtime critical path of the executing code - the path that keeps the closed loop control algorithm from crashing the aircraft into the end of the runway or the spacecraft onto the surface of a distant planet - are probably not in their vocabulary.
Business and Processing Systems
For terrestrial systems, even business processing systems, the number of lines of code has a direct impact on cost and schedule. Let's start with a source code security analyzer. Those whose skill is rapidly chunking out pieces of useful functionality aren't likely to be interested in running all their code through a security analyzer before even starting the compile and checkout process.
A source code security analyzer examines source code to detect and report weaknesses that can lead to security vulnerabilities.
They are one of the last lines of defense to eliminate software vulnerabilities during development or after deployment. Like all things mission critical there is a Source Code Security Analysis Tool Functional Specification Version 1.1, NIST Special Publication 500-268, February 2011, http://samate.nist.gov/docs/source_code_security_analysis_spec_SP500-268_v1.1.pdf
Development and Product Maintenance
A recent hands-on experience with the need to know the SLOC comes from a refactoring effort to remove all the reflection from a code base. For those not familiar with reflection: it provides objects that describe assemblies, modules, and types. Reflection dynamically creates an instance of a type, binds the type to an existing object, or gets the type from an existing object and invokes its methods or accesses its fields and properties. If you are using attributes in your code, reflection enables you to access them.
This is a clever way to build code in a rapidly changing requirements paradigm. A bit too clever.
In larger production transaction processing systems, it's a way to crater the performance of the code base.
Removing all the reflection code structures removed a huge percentage of the CPU time, memory requirements, and database performance impacts - along with separating all the DB logic into Stored Procedures - resulting in the decommissioning of large chunks of the server farm running a very large public health application.
How long is it going to take to refactor all this code? I know, let's make an estimate by counting the lines of code. Do a few conversions from the current design (reflection) and record how long they took. Multiply the resulting hours per line of code by the total lines of code (objects and their size) and we have an Estimate to Complete. Add some margin and we'll know approximately when the big pile of crappy code can get rid of the smell of running fat, slow, and error prone.
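The arithmetic above fits in a few lines of Python. This is a minimal sketch; the function name and the sample figures (pilot size, pilot hours, total SLOC, margin) are illustrative placeholders, not data from the actual project:

```python
def refactor_etc(total_sloc, pilot_sloc, pilot_hours, margin=0.25):
    """Estimate to Complete for a refactoring effort.

    Convert a few representative units first (the pilot), record how
    long that took, scale linearly by SLOC, then add schedule margin.
    """
    hours_per_sloc = pilot_hours / pilot_sloc
    raw_estimate = total_sloc * hours_per_sloc
    return raw_estimate * (1 + margin)

# Illustrative numbers: a 2,000-SLOC pilot took 120 hours;
# the full reflection-laden code base is 150,000 SLOC.
estimate = refactor_etc(150_000, 2_000, 120)
print(f"Estimate to Complete: {estimate:,.0f} hours")
```

A linear scale-up like this assumes the pilot is representative of the whole code base; reference class data or a second pilot on the gnarlier modules would tighten it.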
High Performance Embedded Mission Systems
High Performance Embedded Systems are found everywhere. Current estimates show they outnumber desktop and server systems 100 to 1. Most of these systems have ZERO defect goals. As well as ZERO tolerance for performance shortfalls, processing disruptions, and other reset conditions.
How do we have any sense that the code base is capable of meeting these conditions? Testing, of course, is one way. But exhaustive testing is simply not possible. In a past life, verification and validation of the code was the method - and it still is. Along with that is the cyclomatic complexity assessment of the code base. Another activity not likely to be of much interest to those producing small chunks of sliced code to rapidly satisfy the customer's emerging and possibly unknowable needs until they see it working.
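Cyclomatic complexity is simple enough to approximate directly. Here is a minimal sketch in Python that counts decision points in a module's AST; production tools (lizard, radon, and the like) do this per-function and more completely, but the principle is the same: complexity is one plus the number of branches.

```python
import ast

# Node types that add a branch to the control-flow graph.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES)
                   for node in ast.walk(tree))

sample = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0 and i > 2:
            print(i)
    return "done"
"""
print(cyclomatic_complexity(sample))  # 1 + if + for + if + and = 5
```

A common rule of thumb treats functions above 10 as candidates for restructuring; mission critical standards often set the threshold lower.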
So In The End
Unless we suspend the principles of Microeconomics and Managerial Finance when making management decisions in the presence of uncertainty, we're going to need to estimate. Unless we suspend the principles of probability and statistics when applied to networks of interrelated work, we're not going to be able to make decisions without making estimates.
In the four examples above, from direct hands-on experience, Source Lines of Code are a good proxy for making estimates of cost and schedule, as well as of the complexity of the code base when computing the inherent reliability and fault tolerance of the applications embedded in the software on which our daily lives depend: flight controls in aircraft, process control loops in everything under computer control, including the computers themselves, and the assurance that the code we write is secure and will behave as needed.
If you hear some unsubstantiated claim that SLOC is not of any use in estimating future outcomes, ask: have you ever worked on a system where failure is not an option? If not, the claimant may want to do some more homework.
In some "points of view" the notion of measuring software development parameters with Source Lines of Code is equivalent to the devil incarnate. This is of course another POV that makes little sense without understanding the domain and context. It's one of those irrationally held truths that has been passed down from on high by those NOT working in the domains where SLOC is a critical measure of project and system performance.
In the embedded real time systems domains - Software Intensive Systems - where the number of systems and the related code base dominate the desktop and server side code base by 100X - the number of lines of code in a system is a direct measure of predicted cost and schedule, as well as predicted performance. Estimating in the presence of uncertainty for Software Intensive Systems is a critical success factor.
For some background on software intensive systems...
The importance of embedded systems is undisputed. Their market size is about 100 times the desktop market. Hardly any new product reaches the market without embedded systems any more. The number of embedded systems in a product ranges from one to tens in consumer products and to hundreds in large professional systems. […] This will grow at least one order of magnitude in this decade. […] The strong increasing penetration of embedded systems in products and services creates huge opportunities for all kinds of enterprises and institutions. At the same time, the fast pace of penetration poses an immense threat for most of them. It concerns enterprises and institutions in such diverse areas as agriculture, health care, environment, road construction, security, mechanics, shipbuilding, medical appliances, language products, consumer electronics, etc. (Embedded Systems Design: The ARTIST Roadmap for Research and Development. ed. / Bruno Bouyssounouse; Joseph Sifakis. Berlin / Heidelberg : IEEE Computer Society Press, 2005. p. 72 (Lecture Notes in Computer Science, Vol. 3436).
There are some who are repelled by the notion of counting the lines of code, or estimating the number of lines of code that may be needed to produce the needed capabilities. But that'd be because of the domain problem again.
Databases exist that correlate the SLOC with cost and schedule for business systems. (www.qsm.com)
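As an example of what such databases feed, the Basic COCOMO model (Boehm, 1981) converts estimated KSLOC directly into effort and duration. The constants below are Boehm's published coefficients for "organic" (small, familiar-domain) projects; calibrated parametric tools use the same functional form with coefficients fitted to local reference class data:

```python
def basic_cocomo(ksloc: float, a=2.4, b=1.05, c=2.5, d=0.38):
    """Basic COCOMO, organic mode: effort in person-months and
    schedule in calendar months, from estimated thousands of SLOC."""
    effort = a * ksloc ** b        # person-months
    duration = c * effort ** d     # calendar months
    return effort, duration

effort, months = basic_cocomo(50)  # a 50 KSLOC system
print(f"{effort:.0f} person-months over {months:.0f} calendar months")
```

The exponents greater than one are the point: effort grows faster than size, which is why "just write twice as much code" is never twice the cost.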
So like it or not, consider it the devil incarnate or not, the numbers talk.
Predicting computer performance requirements for a completed system early in the design and development lifecycle of that system is challenging. Software requirements and avionic or hardware systems many times mature in parallel, and in the early stages of design, uncertainty about meeting the performance requirements makes determining the processing architecture difficult.
Later in the design process, as details are finalized and prototypes can be developed, estimates of performance, cost, and schedule become increasingly accurate. But if we wait until later in the lifecycle to make architectural changes, those changes are much more costly. They also bring schedule and technical risks.
The earlier the performance needs are determined and the corresponding system architecture is established, the easier it is to incorporate an appropriate computing platform (hardware and software) into the design.
A direct example I'm familiar with is NASA's Orion Crew Exploration Vehicle flight software. That approach uses available requirements documentation as the basis of the estimate and decouples input/output (I/O) and computation-based processing, estimating each separately and then combining them into a final result.
This approach was unique in that it estimated the execution time of unwritten or partially specified software, with a specific contribution for I/O, as well as estimating the time needed to develop the code and therefore its cost. The method for estimating I/O processing performance was based on quantifying data volumes, and the method for estimating algorithmic processing was based on approximated code size.
The result was used to predict processor types and quantities, allocate software to processors, predict communication bandwidth utilization, and manage processor margins. ("Requirements-based execution time prediction of a partitioned real-time system using I/O and SLOC estimates," Innovations in Systems and Software Engineering, Volume 8, Issue 4, December 2012, pp. 309-320)
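The decoupled structure can be illustrated with a toy calculation. Every number below (bus bandwidth, instructions per SLOC, clock rate, frame budget) is a placeholder assumption, not Orion data; the point is only the shape of the method: estimate I/O time from data volumes, estimate compute time from approximated code size, then combine them against the processing frame budget.

```python
def frame_utilization(io_bytes, bus_bytes_per_s,
                      sloc, instr_per_sloc, cpu_hz,
                      frame_s):
    """Predict processor-frame utilization by estimating I/O and
    computation time separately and summing them (all inputs are
    illustrative placeholders, not flight data)."""
    io_time = io_bytes / bus_bytes_per_s          # data volume / bandwidth
    compute_time = sloc * instr_per_sloc / cpu_hz # approximated code size
    return (io_time + compute_time) / frame_s

# Hypothetical 50 Hz control frame (20 ms budget):
u = frame_utilization(io_bytes=64_000, bus_bytes_per_s=100e6,
                      sloc=8_000, instr_per_sloc=120, cpu_hz=200e6,
                      frame_s=0.020)
print(f"Predicted frame utilization: {u:.0%}")
```

A prediction like this, made from requirements before the code exists, is what lets you size processors and manage margins early, when architectural change is still cheap.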
Now, is SLOC appropriate for you? Good question. Actually a theological question in some quarters, since the conveyed truth from the agile community is that this is never an appropriate approach. Trouble is, research shows a direct correlation between the size of a software system - both measured and estimated - and its cost and schedule.
Databases exist showing this and other parametric measures that are used to produce estimates very useful to both business and technical management. At the ICEAA 2014 conference, where a colleague and I presented our research paper showing how to apply Time Series Analysis (ARIMA) and Principal Component Analysis (PCA) to estimating future performance of projects, there was a briefing on the databases available for making estimates of software intensive systems. Here's a sample:
These are some sources of reference classes for estimating cost and schedule for business and engineering systems. So whatever your thoughts - and likely biases - SLOC is a very useful prediction tool in many domains - business and embedded systems - with reference class databases, if you're willing to do the work to estimate the complexity of the code. If you claim it can't be done, then for you that's likely true.
Name: InLoox now!
Vendor: InLoox, GmbH
Hosting options: InLoox now! is the cloud version of InLoox. I was unable to test InLoox, the desktop version, as it required a version of Microsoft Outlook that I didn’t have (Apple users, take note).
Prices:
- 39€ per month on a 1 month contract
- 35€ per month on a 6 month contract
- 30€ per month on a 12 month contract
Languages: English and German
Currency: Only USD as far as I can see. I’d be very surprised if there wasn’t a setting somewhere to change it to euros, given that this is a German product.
Basic features: starting a project
InLoox is a fully-featured project management tool that integrates with Outlook or works standalone. There are several different versions. I tried InLoox now!
It’s simple to get started with a new project. Click ‘new’ and then fill in the data to set up the project. Note down who the customer is, which team members will work on this project and what the current status is. You can also add custom fields and notes.
Under the Planning tab there are simple buttons to add tasks (called activities) and milestones.
Once you get tasks entered, a right click gives you a drop down menu that includes an edit option. You can make further changes there, including adding dependencies and resources.
I couldn’t see how to add a baseline, but the tool does have a Gantt chart view with a handy button to highlight the critical path.
Other features: reports and budgeting
You can create planning reports, time reports and budget reports. The budget feature is odd. You can create an expense but I couldn’t see how to add a price to the expense. My colleague took a look and couldn’t work it out either. You surely must be able to add a cost to a resource to calculate the cost of time spent on the project as well as add expense amounts to other items such as train tickets. Otherwise what’s the point of having so many different budget features and reporting options?
To be honest we lost interest in trying to work it out after 20 minutes of random clicking.
Support and help
We probably should have asked the support team. They were very fast with their response when I couldn’t get the desktop version installed and also followed up on Twitter.
The help feature is what you’d expect: a standard searchable help but it’s only useful if the topic you want is actually covered. The Support Centre has community forums and online tutorials so you’ve got that backup as well.
The main issue was that the screenshots in the help feature don’t look like the online version and are probably of the desktop version.
It looks fine – not beautiful, but functional and with so many features to pack in it can’t have the clean user interface of some of the other tools I’ve been testing this month.
However, the verdict in this office was that it was not user-friendly from the outset. It was hard to switch between screens and awkward to navigate between the pages in the ‘sections’, ie where you choose planning, budgets, or to manage your page. There is no quick navigation from one to the other.
To give the software the benefit of the doubt, as with anything the more you use it, the easier it becomes.
What about collaboration features?
Email notifications can be set up or switched off if you like, which is always a good feature. You can also attach documents to tasks which always helps with collaboration. I didn’t try the app version, but it’s good to know that I could get it on my tablet if I wanted.
I didn’t find it intuitive but perhaps that’s impossible for a tool designed to run large projects across multiple teams. It certainly had the features I would look for in a professional project management tool (budgets aside… I assume that was user error, but check it out for yourself if financial management from within your software is something important to you).
I wasn’t blown away by it but it does the job, seems reasonably priced and has a good pedigree. A solid choice for the modern project management team.
If you are a software engineer or other techie who has suddenly found yourself responsible for managing a team and IT projects, then you wouldn’t be the first.
It’s common for IT professionals of all disciplines to take the step into a role where their neck is on the line for managing a software project through to completion.
The skills required for managing developers and successfully delivering a project are not the same as the programming and development skills required to produce good components and code. Growing Software: Proven Strategies for Managing Software Engineers* aims to bridge the knowledge gap for software engineers who receive that promotion.
I read this book a few years ago but it remains one of my all-time favourites and the one I often recommend to new IT project managers, even those who don’t come from a development background because it helps explain how those teams and individuals work. It’s a good primer on communicating with technical people as well.
Setting the scene for software teams
The book is split into five parts, beginning with the environment of a software development team and what it means to create and grow a good team.
Part Two looks at the technology aspects of the management role: defining a product, managing releases and the evaluation and assessment processes required to turn out good quality code. There is an excellent section on prototyping here.
Part Three considers how the engineering function fits into the wider organisation, and there is some good advice here on working successfully with other departments. Testa’s approach to software development is holistic, in that he advocates involving the end users and a wide group of stakeholders as much as possible.
Advice for the whole project and software lifecycle
The last two parts of the book cover what the software development manager is likely to need for the long term, not just for getting the first project off the ground. Part Four looks in some detail at the processes that make up software development. The text reads like it is aimed at smaller development companies and start-ups, as it provides advice for creating a software-development methodology. However, even if this is not relevant for some readers, there is benefit to be gained in the assessment and review of existing processes.
The final section provides pointers on creating a software strategy, technology overhauls and roadmaps for taking the team and the company forward. Again, this reads like it is aimed at smaller companies, but much of it will still be of use to software-development managers working in larger organisations.
From the beginning of the book you can tell that it is designed to be a practical, grounded book. The style is realistic, and it explains how the internal politics of an organisation actually work. There are also hints about establishing the company culture – essential in a new role – and deciding if it is for you. Growing Software also includes appendices and I found Appendix B, about internationalisation, particularly interesting as it covers all types of questions and guidance for converting software for use in other markets. This underlines for me the major premise of the book: it is very commercial and is designed to be of as much practical use to the software manager as possible to ensure positive company results as well as quality software outcomes.
Review updated September 2015. A version of this review was first published in The Computer Journal, 2009, and on this blog in 2010.
*This article contains affiliate links at no cost to you.
Probability and statistics are core to business decision making in the presence of uncertainty. Uncertainty comes in two types - irreducible (aleatory) and reducible (epistemic).
Making decisions in the presence of these two types of uncertainty requires making estimates about outcomes in the presence of the risks created by the uncertainties.
All decisions involve uncertainty, risk, and trade-offs. This is an immutable principle of all business and technical processes in the presence of uncertainty.
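As a sketch of how these two uncertainty types show up in an estimate, the toy Monte Carlo below (all numbers, distributions, and the risk-event penalty are illustrative assumptions, not data from any real project) models irreducible variability as a triangular duration distribution and reducible uncertainty as a risk event whose probability we can buy down with work:

```python
import random

random.seed(7)

def simulate_duration(p_risk_event, trials=100_000):
    """Monte Carlo of project duration (days) under both uncertainty types."""
    samples = []
    for _ in range(trials):
        # Irreducible (aleatory): natural variability in duration,
        # triangular(low, high, mode) in days.
        duration = random.triangular(20, 45, 28)
        # Reducible (epistemic): a risk event (say, an unproven interface)
        # that may or may not occur; knowledge can lower its probability.
        if random.random() < p_risk_event:
            duration += 10  # rework penalty if the risk fires
        samples.append(duration)
    samples.sort()
    mean = sum(samples) / len(samples)
    p80 = samples[int(0.80 * len(samples))]  # 80% confidence completion
    return mean, p80

# Before and after spending effort to reduce the epistemic risk:
print(simulate_duration(p_risk_event=0.40))  # risk unaddressed
print(simulate_duration(p_risk_event=0.05))  # risk bought down
```

Note that only the risk-event probability moves when we buy information; the triangular spread stays, which is exactly what makes it irreducible.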
Successfully managing software project cost within a limited budget is an important concern in any business. A lack of information, and of reliable tools to support the estimating process, makes it difficult to produce estimates during the early project planning stages. Controlling cost to an acceptable level requires appropriate and accurate measurement of the various project-related variables and an understanding of the magnitude of their effects.
The importance of early estimating to those funding the project, or those providing capital to fund products, cannot be overemphasized.
Making cost estimates with Bayesian decision processes is a well-developed discipline. Here's a recent paper from a colleague at NASA, Christian Smart.
The risks created by these uncertainties are always present. If left unaddressed, they put our project at risk of failure.
In the end, risk management is about estimating the impacts of reducible and irreducible uncertainty.
As Tim Lister says, "Risk management is how adults manage projects."
Some More Resources on Bayesian Decision Making
- The use of Bayes and causal modelling in decision making, uncertainty and risk, Norman Fenton and Martin Neil
- Decision support software for probabilistic risk assessment using Bayesian networks, Norman Fenton and Martin Neil
- Software Project Level Estimation Model Framework based on Bayesian Belief Networks, Hao Wang, Fei Peng, Chao Zhang (Siemens Ltd. China, CT SE, Beijing), and Andrej Pietschker (Siemens AG, CT SE 1, Munich)
- Decision Making in the Presence of Uncertainty, Milos Hauskrecht
- Managerial Decision Making Under Risk and Uncertainty, Ari Riabacke, IAENG International Journal of Computer Science, 32:4, IJCS_32_4_12
- Decision Making Under Uncertainty: Think Clearly, Act Decisively, Feel Confident – Unilever’s story, Sven Roden
- Supporting Early Decision-Making in the Presence of Uncertainty, Jennifer Horkoff, Rick Salay, Marsha Chechik, and Alessio Di Sandro
- Risk-informed decision-making in the presence of epistemic uncertainty, Didier Dubois, Dominique Guyonnet, International Journal of General Systems, Taylor & Francis, 2011, 40 (2), pp.145-167.
Been on the road for two weeks straight: at a client for a week, at VMWorld for a few days, then back at the client site. During this time, the primary work has been deciding how to move the existing platform and augmenting software systems forward using the Accelerator paradigm.
For those not familiar with Accelerators: they are fixed-term, cohort-based programs that include mentorship and educational components and culminate in a public pitch event or demo day.
Money is given to the cohort members ($25,000 to $100,000), mentors provide intensive advice to the members over an 8 to 12 week period, in exchange for a percent of future equity. At the end of the cycle, the software products that result are further funded usually through venture capital, in support of the product strategy of the firm. At this client, we're doing this to expand the code base in a rapid manner to respond to rapid market needs, which are beyond our current capacity to meet in a timely manner.
The business of funding other people to produce value for the firm mandates making decisions in the presence of uncertainty. This is everyday, normal business management. All businesses do this. We team with other businesses, we provide funds directly, others provide funds, and we put out a challenge for those applying for the funds to provide something of value to our portfolio needs. These challenges are in support of our mission. Once selected, the cohort members participate in workshops, mentoring, coaching, architectural assessments, and other standard software development processes.
But there is always uncertainty. This uncertainty is around the knowledge we need to make decisions. Questions like: How much can be developed in the allotted time for the allotted money? How much effort will it take to arrive at a needed set of capabilities that meets our needs? How much testing will be needed to confirm the produced software will function properly with the existing portfolio of capabilities?
The answers to these and hundreds of other questions involve uncertainty, risks, and trade-offs. A rational decision-making framework for answering them involves estimating nearly everything before working examples are present. These estimates are based on experience, assessments, reference classes, models using metrics and measurements, some measured data, but mostly past experience tested in a model.
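The reference-class idea mentioned above can be sketched in a few lines: scale a raw bottom-up estimate by the empirical distribution of actual-to-estimate ratios from comparable past projects. The ratios and dollar figure below are illustrative assumptions, not real data:

```python
# Actual cost / estimated cost for comparable past projects (illustrative).
past_ratios = [0.9, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.1]

def reference_class_estimate(raw_estimate, ratios, confidence=0.8):
    """Return the cost that covers `confidence` of the reference class.

    Picks the ratio at the requested percentile of past outcomes and
    applies it to the raw estimate - a crude empirical quantile, good
    enough to show the mechanics.
    """
    ordered = sorted(ratios)
    idx = min(int(confidence * len(ordered)), len(ordered) - 1)
    return raw_estimate * ordered[idx]

raw = 500_000  # bottom-up estimate, dollars
print(reference_class_estimate(raw, past_ratios, confidence=0.8))
```

The point is not the arithmetic but the discipline: the estimate is anchored to measured outcomes of past work, not to the single most optimistic number.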
No credible decision making can be performed without estimating the impact of the decision on the future performance of our efforts.
This is so core to all business management that it has names: Managerial Finance and the Microeconomics of Software Development. The main objective of these disciplines is to improve the probability of success by learning from the metrics of past efforts mapped to the current effort.
These models do not rely on a single causal explanation. Instead, they combine statistical inference from available data (objective factors) with other, subjective factors. In all cases, estimates are the basis of the decision-making process. The causal relationships are themselves uncertain in their connectivity and influence.
One approach to this problem is the application of Bayes Networks: models using a probabilistic Directed Acyclic Graph (DAG) to represent a set of random variables and their conditional dependencies on each other. Bayes' Theorem provides a rational means of updating our belief in some unknown hypothesis in light of new or additional evidence - observed outcomes or metrics.
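A minimal sketch of that Bayes' Theorem update, applied to a typical estimating question - all probabilities below are illustrative assumptions, not calibrated values:

```python
def bayes_update(prior, likelihood, likelihood_given_not):
    """P(H|E) = P(E|H)*P(H) / [P(E|H)*P(H) + P(E|~H)*P(~H)]"""
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

# Hypothesis H: the project finishes on time.
# Evidence E: an early milestone was missed.
p_on_time = 0.70          # prior belief in on-time delivery
p_miss_if_on_time = 0.20  # on-time projects still sometimes miss one milestone
p_miss_if_late = 0.80     # late projects usually miss it

posterior = bayes_update(p_on_time, p_miss_if_on_time, p_miss_if_late)
print(round(posterior, 3))  # → 0.368
```

The missed milestone drops the belief in on-time delivery from 0.70 to about 0.37; as further outcomes or metrics arrive, the posterior becomes the new prior and the update repeats - the same mechanics a Bayes Network applies across many connected variables.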
So in the end it comes down to this simple and yet powerful observation:
Managerial Finance is based on making decisions in the presence of uncertainty. To make these decisions, the needed information must, in many cases, be estimated.
Anyone conjecturing that decisions can be made in the presence of the normal uncertainty (reducible and irreducible) of business, in the absence of estimates of the outcomes of those decisions, is willfully ignoring the core principles of the business decision support paradigm.